[resmoke] 2019-11-26T14:30:43.847-0500 verbatim resmoke.py invocation: ./buildscripts/resmoke.py --dbpath /home/nz_linux/data --jobs=16 --suites concurrency_sharded_replication jstests/concurrency/fsm_workloads/agg_out.js --repeat=10
[resmoke] 2019-11-26T14:30:43.869-0500 YAML configuration of suite concurrency_sharded_replication
test_kind: fsm_workload_test
selector:
  roots:
  - jstests/concurrency/fsm_workloads/agg_out.js
executor:
  archive:
    hooks:
    - CheckReplDBHashInBackground
    - CheckReplDBHash
    - ValidateCollections
    tests: true
  config:
    shell_options:
      global_vars:
        TestData:
          runningWithAutoSplit: false
          runningWithBalancer: false
          usingReplicaSetShards: true
      readMode: commands
  fixture:
    class: ShardedClusterFixture
    enable_autosplit: false
    enable_balancer: false
    mongod_options:
      set_parameters:
        enableTestCommands: 1
    mongos_options:
      set_parameters:
        enableTestCommands: 1
    num_mongos: 2
    num_rs_nodes_per_shard: 3
    num_shards: 2
    shard_options:
      mongod_options:
        oplogSize: 1024
  hooks:
  - class: CheckReplDBHashInBackground
  - class: CheckReplDBHash
  - class: ValidateCollections
  - class: CleanupConcurrencyWorkloads
logging:
  executor:
    format: '[%(name)s] %(asctime)s %(message)s'
    handlers:
    - class: logging.StreamHandler
  fixture:
    format: '[%(name)s] %(message)s'
    handlers:
    - class: logging.StreamHandler
  tests:
    format: '[%(name)s] %(asctime)s %(message)s'
    handlers:
    - class: logging.StreamHandler
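A sketch of what the fixture settings above imply for process count (the variable names here are illustrative, not resmoke internals; the single-node config server replica set is an assumption taken from the `config-rs` startup later in this log):

```python
# Illustrative arithmetic only: how many server processes the
# ShardedClusterFixture above spawns. Field names mirror the YAML.
num_shards = 2
num_rs_nodes_per_shard = 3
num_mongos = 2
config_server_nodes = 1  # assumption: one-node config-rs, as this log shows

shard_mongods = num_shards * num_rs_nodes_per_shard
total_mongods = shard_mongods + config_server_nodes
total_processes = total_mongods + num_mongos

print(shard_mongods, total_mongods, total_processes)  # 6 7 9
```

So a single job of this suite runs nine server processes before the workload threads even start.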
[executor] 2019-11-26T14:30:43.869-0500 Shuffling order of tests for fsm_workload_tests in suite concurrency_sharded_replication. The seed is 403147940797.
[executor] 2019-11-26T14:30:43.870-0500 Reducing the number of jobs from 16 to 1 since there are only 1 test(s) to run.
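The job reduction in the line above follows a simple cap; a minimal sketch of the rule (resmoke's actual implementation differs):

```python
def effective_jobs(requested_jobs: int, num_tests: int) -> int:
    # resmoke caps the worker count at the number of tests to run:
    # with one test, --jobs=16 collapses to a single job.
    return min(requested_jobs, num_tests)

print(effective_jobs(16, 1))  # 1
```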
[executor] 2019-11-26T14:30:43.870-0500 Starting execution of fsm_workload_tests...
[executor:fsm_workload_test:job0] 2019-11-26T14:30:43.871-0500 Running job0_fixture_setup...
[fsm_workload_test:job0_fixture_setup] 2019-11-26T14:30:43.871-0500 Starting the setup of ShardedClusterFixture (Job #0).
[ShardedClusterFixture:job0:configsvr:primary] Starting mongod on port 20000...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongod --setParameter enableTestCommands=1 --setParameter migrationLockAcquisitionMaxWaitMS=30000 --setParameter logComponentVerbosity={'replication': {'rollback': 2}, 'transaction': 4} --setParameter disableLogicalSessionCacheRefresh=true --setParameter transactionLifetimeLimitSeconds=86400 --setParameter maxIndexBuildDrainBatchSize=10 --setParameter writePeriodicNoops=false --setParameter waitForStepDownOnNonCommandShutdown=false --configsvr --replSet=config-rs --storageEngine=wiredTiger --oplogSize=511 --dbpath=/home/nz_linux/data/job0/resmoke/config/node0 --port=20000 --journal --enableMajorityReadConcern=True
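The mongod invocation above packs its tunables into repeated `--setParameter name=value` flags. A small sketch that extracts them from such an argument vector (`collect_set_parameters` is a hypothetical helper, not part of resmoke):

```python
def collect_set_parameters(argv):
    # Collect every `--setParameter name=value` pair into a dict.
    # Values stay as strings, matching how mongod echoes them back
    # in its "options:" startup line.
    params = {}
    it = iter(argv)
    for tok in it:
        if tok == "--setParameter":
            name, _, value = next(it).partition("=")
            params[name] = value
    return params

argv = ["mongod",
        "--setParameter", "enableTestCommands=1",
        "--setParameter", "transactionLifetimeLimitSeconds=86400",
        "--configsvr", "--replSet=config-rs"]
print(collect_set_parameters(argv))
```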
[ShardedClusterFixture:job0:configsvr:primary] mongod started on port 20000 with pid 13986.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:43.911-0500 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:43.915-0500 I CONTROL [initandlisten] MongoDB starting : pid=13986 port=20000 dbpath=/home/nz_linux/data/job0/resmoke/config/node0 64-bit host=nz_desktop
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:43.915-0500 I CONTROL [initandlisten] db version v0.0.0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:43.915-0500 I CONTROL [initandlisten] git version: unknown
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:43.915-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:43.915-0500 I CONTROL [initandlisten] allocator: system
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:43.915-0500 I CONTROL [initandlisten] modules: enterprise ninja
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:43.915-0500 I CONTROL [initandlisten] build environment:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:43.915-0500 I CONTROL [initandlisten] distarch: x86_64
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:43.915-0500 I CONTROL [initandlisten] target_arch: x86_64
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:43.915-0500 I CONTROL [initandlisten] options: { net: { port: 20000 }, replication: { enableMajorityReadConcern: true, oplogSizeMB: 511, replSet: "config-rs" }, setParameter: { disableLogicalSessionCacheRefresh: "true", enableTestCommands: "1", logComponentVerbosity: "{'replication': {'rollback': 2}, 'transaction': 4}", maxIndexBuildDrainBatchSize: "10", migrationLockAcquisitionMaxWaitMS: "30000", transactionLifetimeLimitSeconds: "86400", waitForStepDownOnNonCommandShutdown: "false", writePeriodicNoops: "false" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/home/nz_linux/data/job0/resmoke/config/node0", engine: "wiredTiger", journal: { enabled: true } } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:43.915-0500 I STORAGE [initandlisten]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:43.915-0500 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:43.915-0500 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:43.916-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=31635M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],
[ShardedClusterFixture:job0:configsvr:primary] Waiting to connect to mongod on port 20000.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.110-0500 I STORAGE [initandlisten] WiredTiger message [1574796645:110200][13986:0x7f877160da00], txn-recover: Set global recovery timestamp: (0,0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.121-0500 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.131-0500 I STORAGE [initandlisten] Timestamp monitor starting
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.135-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.135-0500 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.135-0500 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.135-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.135-0500 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.135-0500 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.135-0500 I CONTROL [initandlisten] ** Start the server with --bind_ip to specify which IP
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.135-0500 I CONTROL [initandlisten] ** addresses it should serve responses from, or with --bind_ip_all to
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.136-0500 I CONTROL [initandlisten] ** bind to all interfaces. If this behavior is desired, start the
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.136-0500 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.136-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.138-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.138-0500 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.138-0500 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.138-0500 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.138-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.138-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.138-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.138-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.138-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.138-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.138-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.138-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. Number of files is 1024, should be at least 64000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.138-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.139-0500 I SHARDING [initandlisten] Marking collection local.system.replset as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.139-0500 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.139-0500 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.140-0500 I SHARDING [initandlisten] Marking collection admin.system.version as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.140-0500 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: a1488758-c116-4144-adba-02b8f3b8144d and options: { capped: true, size: 10485760 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.150-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.startup_log
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.150-0500 I SHARDING [initandlisten] Marking collection local.startup_log as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.151-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/home/nz_linux/data/job0/resmoke/config/node0/diagnostic.data'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.152-0500 I SHARDING [thread1] creating distributed lock ping thread for process ConfigServer (sleeping for 30000ms)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.152-0500 I SHARDING [shard-registry-reload] Periodic reload of shard registry failed :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible.; will retry after 30s
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.153-0500 I STORAGE [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: b5258dce-fb89-4436-a191-b8586ea2e6c0 and options: {}
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.163-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.163-0500 I STORAGE [initandlisten] createCollection: local.replset.minvalid with generated UUID: ce934bfb-84f4-4d44-a963-37c09c6c95a6 and options: {}
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.174-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.minvalid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.174-0500 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.174-0500 I STORAGE [initandlisten] createCollection: local.replset.election with generated UUID: 5f00e271-c3c6-4d7b-9d39-1c8e9e8a77d4 and options: {}
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.184-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.election
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.184-0500 I SHARDING [initandlisten] Marking collection local.replset.election as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.185-0500 I REPL [initandlisten] Did not find local initialized voted for document at startup.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.185-0500 I REPL [initandlisten] Did not find local Rollback ID document at startup. Creating one.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.185-0500 I STORAGE [initandlisten] createCollection: local.system.rollback.id with generated UUID: 0ad52f2a-9d3e-4f9f-b91b-17a9c570ab7e and options: {}
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.194-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.system.rollback.id
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.194-0500 I SHARDING [initandlisten] Marking collection local.system.rollback.id as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.194-0500 I REPL [initandlisten] Initialized the rollback ID to 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.194-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.195-0500 I NETWORK [initandlisten] Listening on /tmp/mongodb-20000.sock
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.195-0500 I NETWORK [initandlisten] Listening on 127.0.0.1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.195-0500 I NETWORK [initandlisten] waiting for connections on port 20000
[ShardedClusterFixture:job0:configsvr:primary] Waiting to connect to mongod on port 20000.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.588-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55318 #1 (1 connection now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.588-0500 I NETWORK [conn1] received client metadata from 127.0.0.1:55318 conn1: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.689-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55320 #2 (2 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.689-0500 I NETWORK [conn2] received client metadata from 127.0.0.1:55320 conn2: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.690-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55322 #3 (3 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.690-0500 I NETWORK [conn3] received client metadata from 127.0.0.1:55322 conn3: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] Successfully contacted the mongod on port 20000.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.691-0500 I NETWORK [conn3] end connection 127.0.0.1:55322 (2 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.691-0500 I NETWORK [conn2] end connection 127.0.0.1:55320 (1 connection now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.692-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55324 #4 (2 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.692-0500 I NETWORK [conn1] end connection 127.0.0.1:55318 (1 connection now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.693-0500 I NETWORK [conn4] received client metadata from 127.0.0.1:55324 conn4: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.693-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55326 #5 (2 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.693-0500 I NETWORK [conn5] received client metadata from 127.0.0.1:55326 conn5: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.694-0500 I SHARDING [conn5] Marking collection local.oplog.rs as collection version:
[ShardedClusterFixture:job0:configsvr] Issuing replSetInitiate command: {'_id': 'config-rs', 'protocolVersion': 1, 'configsvr': True, 'settings': {'electionTimeoutMillis': 86400000}, 'members': [{'_id': 0, 'host': 'localhost:20000'}]}
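The replSetInitiate document above can be reproduced as a plain dict; a sketch of building it (`config_rs_initiate_doc` is an illustrative helper, not fixture code; actually issuing the command would need a live PyMongo connection, omitted here). The day-long `electionTimeoutMillis` keeps the single config node from triggering spurious elections mid-test:

```python
def config_rs_initiate_doc(port: int) -> dict:
    # Mirrors the document the fixture logs: one voting member,
    # CSRS role, and an election timeout of a full day in ms.
    return {
        "_id": "config-rs",
        "protocolVersion": 1,
        "configsvr": True,
        "settings": {"electionTimeoutMillis": 24 * 60 * 60 * 1000},
        "members": [{"_id": 0, "host": f"localhost:{port}"}],
    }

doc = config_rs_initiate_doc(20000)
print(doc["settings"]["electionTimeoutMillis"])  # 86400000
```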
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.696-0500 I REPL [conn5] replSetInitiate admin command received from client
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.697-0500 I REPL [conn5] replSetInitiate config object with 1 members parses ok
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.697-0500 I REPL [conn5] ******
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.697-0500 I REPL [conn5] creating replication oplog of size: 511MB...
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.697-0500 I STORAGE [conn5] createCollection: local.oplog.rs with generated UUID: 5bb0c359-7cb9-48f8-8ff8-4b4c84c12ec5 and options: { capped: true, size: 535822336, autoIndexId: false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.704-0500 I STORAGE [conn5] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.704-0500 I STORAGE [conn5] WiredTiger record store oplog processing took 0ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.725-0500 I REPL [conn5] ******
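The 535822336-byte `size` in the createCollection line above is exactly the 511 MB requested via `--oplogSize`, converted with binary megabytes; the check:

```python
# --oplogSize is given in mebibytes; mongod sizes the capped
# local.oplog.rs collection in bytes.
oplog_size_mb = 511
oplog_size_bytes = oplog_size_mb * 1024 * 1024
print(oplog_size_bytes)  # 535822336, matching the createCollection options
```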
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.725-0500 I STORAGE [conn5] createCollection: local.system.replset with generated UUID: ea98bf03-b956-4e01-b9a4-857e601cceda and options: {}
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.735-0500 I INDEX [conn5] index build: done building index _id_ on ns local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.736-0500 I STORAGE [conn5] createCollection: admin.system.version with provided UUID: 1b1834a4-71ee-49e7-abbc-7ae09d5089b2 and options: { uuid: UUID("1b1834a4-71ee-49e7-abbc-7ae09d5089b2") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.746-0500 I INDEX [conn5] index build: done building index _id_ on ns admin.system.version
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.747-0500 I COMMAND [conn5] setting featureCompatibilityVersion to 4.4
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.747-0500 I NETWORK [conn5] Skip closing connection for connection # 5
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.747-0500 I NETWORK [conn5] Skip closing connection for connection # 4
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.747-0500 I REPL [conn5] New replica set config in use: { _id: "config-rs", version: 1, configsvr: true, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "localhost:20000", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 86400000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5ddd7d655cde74b6784bb14d') } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.747-0500 I REPL [conn5] This node is localhost:20000 in the config
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.747-0500 I REPL [conn5] transition to STARTUP2 from STARTUP
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.747-0500 I REPL [conn5] Starting replication storage threads
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.751-0500 I REPL [conn5] transition to RECOVERING from STARTUP2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.751-0500 I REPL [conn5] Starting replication fetcher thread
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.752-0500 I REPL [conn5] Starting replication applier thread
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.752-0500 I REPL [conn5] Starting replication reporter thread
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.752-0500 I REPL [OplogApplier-0] Starting oplog application
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.752-0500 I REPL [OplogApplier-0] transition to SECONDARY from RECOVERING
[ShardedClusterFixture:job0:configsvr] Waiting for primary on port 20000 to be elected.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.810-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55328 #6 (3 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.810-0500 I ELECTION [OplogApplier-0] conducting a dry run election to see if we could be elected. current term: 0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.810-0500 I ELECTION [ReplCoord-0] dry election run succeeded, running for election in term 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.810-0500 I NETWORK [conn6] received client metadata from 127.0.0.1:55328 conn6: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.811-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55330 #7 (4 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.812-0500 I NETWORK [conn7] received client metadata from 127.0.0.1:55330 conn7: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.812-0500 I ELECTION [ReplCoord-1] election succeeded, assuming primary role in term 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.812-0500 I REPL [ReplCoord-1] transition to PRIMARY from SECONDARY
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.812-0500 I REPL [ReplCoord-1] Resetting sync source to empty, which was :27017
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.812-0500 I REPL [ReplCoord-1] Entering primary catch-up mode.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.812-0500 I REPL [ReplCoord-1] Exited primary catch-up mode.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:45.812-0500 I REPL [ReplCoord-1] Stopping replication producer
[ShardedClusterFixture:job0:configsvr] Waiting for primary on port 20000 to be elected.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.811-0500 I REPL [ReplBatcher] Oplog buffer has been drained in term 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.811-0500 I REPL [RstlKillOpThread] Starting to kill user operations
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.812-0500 I REPL [RstlKillOpThread] Stopped killing user operations
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.812-0500 I REPL [RstlKillOpThread] State transition ops metrics: { lastStateTransition: "stepUp", userOpsKilled: 0, userOpsRunning: 0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.812-0500 I SHARDING [OplogApplier-0] Marking collection config.transactions as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.812-0500 I STORAGE [OplogApplier-0] createCollection: config.transactions with generated UUID: c2741992-901b-4092-a01f-3dfe88ab21c5 and options: {}
[ShardedClusterFixture:job0:configsvr] Waiting for primary on port 20000 to be elected.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.823-0500 I INDEX [OplogApplier-0] index build: done building index _id_ on ns config.transactions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.824-0500 I STORAGE [OplogApplier-0] createCollection: config.chunks with provided UUID: e7035d0b-a892-4426-b520-83da62bcbda6 and options: { uuid: UUID("e7035d0b-a892-4426-b520-83da62bcbda6") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.833-0500 I INDEX [OplogApplier-0] index build: done building index _id_ on ns config.chunks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.839-0500 I INDEX [OplogApplier-0] index build: done building index ns_1_min_1 on ns config.chunks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.844-0500 I INDEX [OplogApplier-0] index build: done building index ns_1_shard_1_min_1 on ns config.chunks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.850-0500 I INDEX [OplogApplier-0] index build: done building index ns_1_lastmod_1 on ns config.chunks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.850-0500 I STORAGE [OplogApplier-0] createCollection: config.migrations with provided UUID: 550e32ef-0dd4-48f9-bb5e-9e21bec0734f and options: { uuid: UUID("550e32ef-0dd4-48f9-bb5e-9e21bec0734f") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.860-0500 I INDEX [OplogApplier-0] index build: done building index _id_ on ns config.migrations
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.865-0500 I INDEX [OplogApplier-0] index build: done building index ns_1_min_1 on ns config.migrations
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.865-0500 I STORAGE [OplogApplier-0] createCollection: config.shards with provided UUID: ed6a2b77-0788-4ad3-a1b0-ccd61535c24f and options: { uuid: UUID("ed6a2b77-0788-4ad3-a1b0-ccd61535c24f") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.875-0500 I INDEX [OplogApplier-0] index build: done building index _id_ on ns config.shards
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.880-0500 I INDEX [OplogApplier-0] index build: done building index host_1 on ns config.shards
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.880-0500 I STORAGE [OplogApplier-0] createCollection: config.locks with provided UUID: dbde06c7-d8ac-4f80-ab9f-cae486f16451 and options: { uuid: UUID("dbde06c7-d8ac-4f80-ab9f-cae486f16451") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.890-0500 I INDEX [OplogApplier-0] index build: done building index _id_ on ns config.locks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.895-0500 I INDEX [OplogApplier-0] index build: done building index ts_1 on ns config.locks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.901-0500 I INDEX [OplogApplier-0] index build: done building index state_1_process_1 on ns config.locks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.901-0500 I STORAGE [OplogApplier-0] createCollection: config.lockpings with provided UUID: f662f115-623a-496b-9953-7132cdf8c056 and options: { uuid: UUID("f662f115-623a-496b-9953-7132cdf8c056") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.912-0500 I INDEX [OplogApplier-0] index build: done building index _id_ on ns config.lockpings
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.917-0500 I INDEX [OplogApplier-0] index build: done building index ping_1 on ns config.lockpings
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.917-0500 I STORAGE [OplogApplier-0] createCollection: config.tags with provided UUID: d225b508-e40e-4c3c-a716-26adc4561055 and options: { uuid: UUID("d225b508-e40e-4c3c-a716-26adc4561055") }
[ShardedClusterFixture:job0:configsvr] Waiting for primary on port 20000 to be elected.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.927-0500 I INDEX [OplogApplier-0] index build: done building index _id_ on ns config.tags
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.932-0500 I INDEX [OplogApplier-0] index build: done building index ns_1_min_1 on ns config.tags
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.938-0500 I INDEX [OplogApplier-0] index build: done building index ns_1_tag_1 on ns config.tags
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.938-0500 I SHARDING [OplogApplier-0] Marking collection config.version as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.938-0500 I STORAGE [OplogApplier-0] createCollection: config.version with generated UUID: d52b8328-6d55-4f54-8cfd-e715a58e3315 and options: {}
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.948-0500 I INDEX [OplogApplier-0] index build: done building index _id_ on ns config.version
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.948-0500 I SHARDING [OplogApplier-0] Marking collection config.locks as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.948-0500 I SHARDING [OplogApplier-0] Marking collection config.migrations as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.949-0500 I SHARDING [Balancer] CSRS balancer is starting
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.949-0500 D3 TXN [TransactionCoordinator] Waiting for OpTime { ts: Timestamp(1574796646, 40), t: 1 } to become majority committed
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.949-0500 I STORAGE [OplogApplier-0] IndexBuildsCoordinator::onStepUp - this node is stepping up to primary
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.949-0500 I STORAGE [OplogApplier-0] Triggering the first stable checkpoint. Initial Data: Timestamp(1574796645, 1) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1574796646, 32)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.949-0500 I REPL [OplogApplier-0] transition to primary complete; database writes are now permitted
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.949-0500 I SHARDING [TransactionCoordinator] Marking collection config.transaction_coordinators as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.950-0500 I SHARDING [monitoring-keys-for-HMAC] Marking collection admin.system.keys as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.950-0500 I TXN [TransactionCoordinator] Need to resume coordinating commit for 0 transactions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.950-0500 I TXN [TransactionCoordinator] Incoming coordinateCommit requests are now enabled
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.950-0500 I SHARDING [Balancer] Marking collection config.settings as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.950-0500 I STORAGE [monitoring-keys-for-HMAC] createCollection: admin.system.keys with generated UUID: 807238e6-a72f-4ef0-b305-4bab60afd0e6 and options: {}
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.950-0500 I SHARDING [Balancer] CSRS balancer thread is recovering
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.950-0500 I SHARDING [Balancer] CSRS balancer thread is recovered
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.950-0500 I SHARDING [Balancer] Marking collection config.shards as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.950-0500 I SHARDING [Balancer] Marking collection config.collections as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:46.965-0500 I INDEX [monitoring-keys-for-HMAC] index build: done building index _id_ on ns admin.system.keys
[ShardedClusterFixture:job0:configsvr] Waiting for primary on port 20000 to be elected.
[ShardedClusterFixture:job0:configsvr] Primary on port 20000 successfully elected.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:47.022-0500 I NETWORK [conn7] end connection 127.0.0.1:55330 (3 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:47.022-0500 I NETWORK [conn6] end connection 127.0.0.1:55328 (2 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:47.022-0500 I NETWORK [conn5] end connection 127.0.0.1:55326 (1 connection now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:47.022-0500 I NETWORK [conn4] end connection 127.0.0.1:55324 (0 connections now open)
[ShardedClusterFixture:job0:shard0:primary] Starting mongod on port 20001...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongod --setParameter enableTestCommands=1 --setParameter migrationLockAcquisitionMaxWaitMS=30000 --setParameter logComponentVerbosity={'replication': {'rollback': 2}, 'transaction': 4} --setParameter orphanCleanupDelaySecs=1 --setParameter disableLogicalSessionCacheRefresh=true --setParameter transactionLifetimeLimitSeconds=86400 --setParameter maxIndexBuildDrainBatchSize=10 --setParameter writePeriodicNoops=false --setParameter waitForStepDownOnNonCommandShutdown=false --oplogSize=1024 --shardsvr --replSet=shard-rs0 --dbpath=/home/nz_linux/data/job0/resmoke/shard0/node0 --port=20001 --enableMajorityReadConcern=True
[ShardedClusterFixture:job0:shard0:primary] mongod started on port 20001 with pid 14076.
[ShardedClusterFixture:job0:shard0:secondary0] Starting mongod on port 20002...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongod --setParameter enableTestCommands=1 --setParameter migrationLockAcquisitionMaxWaitMS=30000 --setParameter logComponentVerbosity={'replication': {'rollback': 2}, 'transaction': 4} --setParameter orphanCleanupDelaySecs=1 --setParameter disableLogicalSessionCacheRefresh=true --setParameter transactionLifetimeLimitSeconds=86400 --setParameter maxIndexBuildDrainBatchSize=10 --setParameter writePeriodicNoops=false --setParameter waitForStepDownOnNonCommandShutdown=false --oplogSize=1024 --shardsvr --replSet=shard-rs0 --dbpath=/home/nz_linux/data/job0/resmoke/shard0/node1 --port=20002 --enableMajorityReadConcern=True
[ShardedClusterFixture:job0:shard0:secondary0] mongod started on port 20002 with pid 14079.
[ShardedClusterFixture:job0:shard0:secondary1] Starting mongod on port 20003...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongod --setParameter enableTestCommands=1 --setParameter migrationLockAcquisitionMaxWaitMS=30000 --setParameter logComponentVerbosity={'replication': {'rollback': 2}, 'transaction': 4} --setParameter orphanCleanupDelaySecs=1 --setParameter disableLogicalSessionCacheRefresh=true --setParameter transactionLifetimeLimitSeconds=86400 --setParameter maxIndexBuildDrainBatchSize=10 --setParameter writePeriodicNoops=false --setParameter waitForStepDownOnNonCommandShutdown=false --oplogSize=1024 --shardsvr --replSet=shard-rs0 --dbpath=/home/nz_linux/data/job0/resmoke/shard0/node2 --port=20003 --enableMajorityReadConcern=True
[ShardedClusterFixture:job0:shard0:secondary1] mongod started on port 20003 with pid 14082.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.063-0500 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.065-0500 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.065-0500 I CONTROL [initandlisten] MongoDB starting : pid=14076 port=20001 dbpath=/home/nz_linux/data/job0/resmoke/shard0/node0 64-bit host=nz_desktop
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.065-0500 I CONTROL [initandlisten] db version v0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.065-0500 I CONTROL [initandlisten] git version: unknown
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.065-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.065-0500 I CONTROL [initandlisten] allocator: system
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.065-0500 I CONTROL [initandlisten] modules: enterprise ninja
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.065-0500 I CONTROL [initandlisten] build environment:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.065-0500 I CONTROL [initandlisten] distarch: x86_64
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.065-0500 I CONTROL [initandlisten] target_arch: x86_64
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.065-0500 I CONTROL [initandlisten] options: { net: { port: 20001 }, replication: { enableMajorityReadConcern: true, oplogSizeMB: 1024, replSet: "shard-rs0" }, setParameter: { disableLogicalSessionCacheRefresh: "true", enableTestCommands: "1", logComponentVerbosity: "{'replication': {'rollback': 2}, 'transaction': 4}", maxIndexBuildDrainBatchSize: "10", migrationLockAcquisitionMaxWaitMS: "30000", orphanCleanupDelaySecs: "1", transactionLifetimeLimitSeconds: "86400", waitForStepDownOnNonCommandShutdown: "false", writePeriodicNoops: "false" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/home/nz_linux/data/job0/resmoke/shard0/node0" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.066-0500 I STORAGE [initandlisten]
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.066-0500 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.066-0500 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.066-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=31635M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.071-0500 I CONTROL [initandlisten] MongoDB starting : pid=14079 port=20002 dbpath=/home/nz_linux/data/job0/resmoke/shard0/node1 64-bit host=nz_desktop
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.071-0500 I CONTROL [initandlisten] db version v0.0.0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.071-0500 I CONTROL [initandlisten] git version: unknown
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.071-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.071-0500 I CONTROL [initandlisten] allocator: system
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.071-0500 I CONTROL [initandlisten] modules: enterprise ninja
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.071-0500 I CONTROL [initandlisten] build environment:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.071-0500 I CONTROL [initandlisten] distarch: x86_64
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.071-0500 I CONTROL [initandlisten] target_arch: x86_64
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.071-0500 I CONTROL [initandlisten] options: { net: { port: 20002 }, replication: { enableMajorityReadConcern: true, oplogSizeMB: 1024, replSet: "shard-rs0" }, setParameter: { disableLogicalSessionCacheRefresh: "true", enableTestCommands: "1", logComponentVerbosity: "{'replication': {'rollback': 2}, 'transaction': 4}", maxIndexBuildDrainBatchSize: "10", migrationLockAcquisitionMaxWaitMS: "30000", orphanCleanupDelaySecs: "1", transactionLifetimeLimitSeconds: "86400", waitForStepDownOnNonCommandShutdown: "false", writePeriodicNoops: "false" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/home/nz_linux/data/job0/resmoke/shard0/node1" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.072-0500 I STORAGE [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.072-0500 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.072-0500 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.072-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=31635M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.075-0500 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.078-0500 I CONTROL [initandlisten] MongoDB starting : pid=14082 port=20003 dbpath=/home/nz_linux/data/job0/resmoke/shard0/node2 64-bit host=nz_desktop
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.078-0500 I CONTROL [initandlisten] db version v0.0.0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.078-0500 I CONTROL [initandlisten] git version: unknown
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.078-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.078-0500 I CONTROL [initandlisten] allocator: system
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.078-0500 I CONTROL [initandlisten] modules: enterprise ninja
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.078-0500 I CONTROL [initandlisten] build environment:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.078-0500 I CONTROL [initandlisten] distarch: x86_64
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.078-0500 I CONTROL [initandlisten] target_arch: x86_64
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.078-0500 I CONTROL [initandlisten] options: { net: { port: 20003 }, replication: { enableMajorityReadConcern: true, oplogSizeMB: 1024, replSet: "shard-rs0" }, setParameter: { disableLogicalSessionCacheRefresh: "true", enableTestCommands: "1", logComponentVerbosity: "{'replication': {'rollback': 2}, 'transaction': 4}", maxIndexBuildDrainBatchSize: "10", migrationLockAcquisitionMaxWaitMS: "30000", orphanCleanupDelaySecs: "1", transactionLifetimeLimitSeconds: "86400", waitForStepDownOnNonCommandShutdown: "false", writePeriodicNoops: "false" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/home/nz_linux/data/job0/resmoke/shard0/node2" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.079-0500 I STORAGE [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.079-0500 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.079-0500 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.079-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=31635M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],
[ShardedClusterFixture:job0:shard0:primary] Waiting to connect to mongod on port 20001.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.906-0500 I STORAGE [initandlisten] WiredTiger message [1574796647:906241][14076:0x7f6c88fbfa00], txn-recover: Set global recovery timestamp: (0,0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.913-0500 I STORAGE [initandlisten] WiredTiger message [1574796647:913239][14079:0x7f8e676f7a00], txn-recover: Set global recovery timestamp: (0,0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.917-0500 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.922-0500 I STORAGE [initandlisten] WiredTiger message [1574796647:922585][14082:0x7f9ec5faba00], txn-recover: Set global recovery timestamp: (0,0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.925-0500 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.933-0500 I STORAGE [initandlisten] Timestamp monitor starting
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.937-0500 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.941-0500 I STORAGE [initandlisten] Timestamp monitor starting
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.943-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.943-0500 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.943-0500 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.943-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.943-0500 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.943-0500 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.943-0500 I CONTROL [initandlisten] ** Start the server with --bind_ip to specify which IP
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.943-0500 I CONTROL [initandlisten] ** addresses it should serve responses from, or with --bind_ip_all to
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.943-0500 I CONTROL [initandlisten] ** bind to all interfaces. If this behavior is desired, start the
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.943-0500 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.943-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.945-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.945-0500 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.945-0500 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.945-0500 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.945-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.945-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.945-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.945-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.945-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.945-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.945-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.945-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. Number of files is 1024, should be at least 64000
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.945-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.946-0500 I SHARDING [initandlisten] Marking collection local.system.replset as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.946-0500 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.946-0500 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.946-0500 I SHARDING [initandlisten] Marking collection admin.system.version as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.946-0500 W SHARDING [initandlisten] Started with --shardsvr, but no shardIdentity document was found on disk in admin.system.version. This most likely means this server has not yet been added to a sharded cluster.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.946-0500 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: e8e71921-e80f-42ad-92d0-ad769374a694 and options: { capped: true, size: 10485760 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.950-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.950-0500 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.950-0500 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.950-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.950-0500 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.950-0500 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.950-0500 I CONTROL [initandlisten] ** Start the server with --bind_ip to specify which IP
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.950-0500 I CONTROL [initandlisten] ** addresses it should serve responses from, or with --bind_ip_all to
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.950-0500 I CONTROL [initandlisten] ** bind to all interfaces. If this behavior is desired, start the
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.950-0500 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.950-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.951-0500 I STORAGE [initandlisten] Timestamp monitor starting
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.952-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.952-0500 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.952-0500 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.952-0500 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.952-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.952-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.952-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.952-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.952-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.952-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.952-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.952-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. Number of files is 1024, should be at least 64000
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.952-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.953-0500 I SHARDING [initandlisten] Marking collection local.system.replset as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.953-0500 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.953-0500 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.953-0500 I SHARDING [initandlisten] Marking collection admin.system.version as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.953-0500 W SHARDING [initandlisten] Started with --shardsvr, but no shardIdentity document was found on disk in admin.system.version. This most likely means this server has not yet been added to a sharded cluster.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.953-0500 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: e0cc0511-0005-4584-a461-5ae30058b4c6 and options: { capped: true, size: 10485760 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.961-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.961-0500 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.961-0500 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.961-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.961-0500 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.961-0500 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.961-0500 I CONTROL [initandlisten] ** Start the server with --bind_ip to specify which IP
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.961-0500 I CONTROL [initandlisten] ** addresses it should serve responses from, or with --bind_ip_all to
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.961-0500 I CONTROL [initandlisten] ** bind to all interfaces. If this behavior is desired, start the
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.961-0500 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.961-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.962-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.startup_log
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.962-0500 I SHARDING [initandlisten] Marking collection local.startup_log as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.962-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/home/nz_linux/data/job0/resmoke/shard0/node0/diagnostic.data'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.963-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.963-0500 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.963-0500 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.963-0500 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.963-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.963-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.963-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.963-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.963-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.963-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.963-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.963-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. Number of files is 1024, should be at least 64000
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.963-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.963-0500 I STORAGE [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: 4ac06258-0ea7-46c8-b773-0c637830872b and options: {}
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.966-0500 I SHARDING [initandlisten] Marking collection local.system.replset as collection version:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.966-0500 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.966-0500 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.966-0500 I SHARDING [initandlisten] Marking collection admin.system.version as collection version:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.967-0500 W SHARDING [initandlisten] Started with --shardsvr, but no shardIdentity document was found on disk in admin.system.version. This most likely means this server has not yet been added to a sharded cluster.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.967-0500 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: 7b6988ea-0c65-41a6-9855-5680c2c711a1 and options: { capped: true, size: 10485760 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.967-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.startup_log
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.967-0500 I SHARDING [initandlisten] Marking collection local.startup_log as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.967-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/home/nz_linux/data/job0/resmoke/shard0/node1/diagnostic.data'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.971-0500 I STORAGE [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: 5d41bfc8-ebca-43f3-a038-30023495a91a and options: {}
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.978-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.978-0500 I STORAGE [initandlisten] createCollection: local.replset.minvalid with generated UUID: a96fd08c-e1c8-43e5-868a-0849697b175e and options: {}
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.982-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.startup_log
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.983-0500 I SHARDING [initandlisten] Marking collection local.startup_log as collection version:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.983-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/home/nz_linux/data/job0/resmoke/shard0/node2/diagnostic.data'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.983-0500 I STORAGE [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: fe211210-ae1b-4ab2-81d6-86b025cc1404 and options: {}
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.987-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:47.987-0500 I STORAGE [initandlisten] createCollection: local.replset.minvalid with generated UUID: 6eb6e647-60c7-450a-a905-f04052287b8a and options: {}
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.993-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.minvalid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.993-0500 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:47.993-0500 I STORAGE [initandlisten] createCollection: local.replset.election with generated UUID: 801ad0de-17c3-44b2-a878-e91b8de004c5 and options: {}
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.998-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:47.998-0500 I STORAGE [initandlisten] createCollection: local.replset.minvalid with generated UUID: 6654b1c2-f323-4c78-9165-5ff31d331960 and options: {}
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.000-0500 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.000-0500 W REPL [ftdc] Rollback ID is not initialized yet.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.000-0500 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.000-0500 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.000-0500 W REPL [ftdc] Rollback ID is not initialized yet.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.000-0500 W REPL [ftdc] Rollback ID is not initialized yet.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.003-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.minvalid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.003-0500 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.003-0500 I STORAGE [initandlisten] createCollection: local.replset.election with generated UUID: d0928956-d7fc-46fe-a9bc-1f07f2435457 and options: {}
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.009-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.election
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.009-0500 I SHARDING [initandlisten] Marking collection local.replset.election as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.009-0500 I REPL [initandlisten] Did not find local initialized voted for document at startup.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.009-0500 I REPL [initandlisten] Did not find local Rollback ID document at startup. Creating one.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.009-0500 I STORAGE [initandlisten] createCollection: local.system.rollback.id with generated UUID: 2d9a033a-73d1-44ef-b7d1-30b6243b0419 and options: {}
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.015-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.minvalid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.015-0500 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.015-0500 I STORAGE [initandlisten] createCollection: local.replset.election with generated UUID: bf7b5380-e70a-475e-ad1b-16751bee6907 and options: {}
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.019-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.election
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.019-0500 I SHARDING [initandlisten] Marking collection local.replset.election as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.019-0500 I REPL [initandlisten] Did not find local initialized voted for document at startup.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.019-0500 I REPL [initandlisten] Did not find local Rollback ID document at startup. Creating one.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.019-0500 I STORAGE [initandlisten] createCollection: local.system.rollback.id with generated UUID: 1099f6d7-f170-471c-a0ac-dc97bd7e42b0 and options: {}
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.024-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.system.rollback.id
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.024-0500 I SHARDING [initandlisten] Marking collection local.system.rollback.id as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.024-0500 I REPL [initandlisten] Initialized the rollback ID to 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.024-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.024-0500 I NETWORK [initandlisten] Listening on /tmp/mongodb-20001.sock
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.024-0500 I NETWORK [initandlisten] Listening on 127.0.0.1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.028-0500 I NETWORK [initandlisten] waiting for connections on port 20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.029-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.election
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.029-0500 I SHARDING [initandlisten] Marking collection local.replset.election as collection version:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.029-0500 I REPL [initandlisten] Did not find local initialized voted for document at startup.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.029-0500 I REPL [initandlisten] Did not find local Rollback ID document at startup. Creating one.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.029-0500 I STORAGE [initandlisten] createCollection: local.system.rollback.id with generated UUID: 9434a858-83b3-4d87-8d66-64bde405790b and options: {}
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.032-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.system.rollback.id
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.032-0500 I SHARDING [initandlisten] Marking collection local.system.rollback.id as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.032-0500 I REPL [initandlisten] Initialized the rollback ID to 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.032-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.036-0500 I NETWORK [initandlisten] Listening on /tmp/mongodb-20002.sock
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.036-0500 I NETWORK [initandlisten] Listening on 127.0.0.1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:48.036-0500 I NETWORK [initandlisten] waiting for connections on port 20002
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.040-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.system.rollback.id
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.040-0500 I SHARDING [initandlisten] Marking collection local.system.rollback.id as collection version:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.040-0500 I REPL [initandlisten] Initialized the rollback ID to 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.040-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.041-0500 I NETWORK [initandlisten] Listening on /tmp/mongodb-20003.sock
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.041-0500 I NETWORK [initandlisten] Listening on 127.0.0.1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:48.041-0500 I NETWORK [initandlisten] waiting for connections on port 20003
[ShardedClusterFixture:job0:shard0:primary] Waiting to connect to mongod on port 20001.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.151-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38078 #1 (1 connection now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.152-0500 I NETWORK [conn1] received client metadata from 127.0.0.1:38078 conn1: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.252-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38080 #2 (2 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.252-0500 I NETWORK [conn2] received client metadata from 127.0.0.1:38080 conn2: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.253-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38082 #3 (3 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.253-0500 I NETWORK [conn3] received client metadata from 127.0.0.1:38082 conn3: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:primary] Successfully contacted the mongod on port 20001.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.254-0500 I NETWORK [conn3] end connection 127.0.0.1:38082 (2 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.254-0500 I NETWORK [conn2] end connection 127.0.0.1:38080 (1 connection now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.255-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38084 #4 (2 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.256-0500 I NETWORK [conn4] received client metadata from 127.0.0.1:38084 conn4: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.256-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38086 #5 (3 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.256-0500 I NETWORK [conn5] received client metadata from 127.0.0.1:38086 conn5: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.258-0500 I NETWORK [conn1] end connection 127.0.0.1:38078 (2 connections now open)
[ShardedClusterFixture:job0:shard0] Issuing replSetInitiate command: {'_id': 'shard-rs0', 'protocolVersion': 1, 'settings': {'electionTimeoutMillis': 86400000}, 'members': [{'_id': 0, 'host': 'localhost:20001'}]}
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.260-0500 I REPL [conn5] replSetInitiate admin command received from client
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.260-0500 I REPL [conn5] replSetInitiate config object with 1 members parses ok
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.260-0500 I REPL [conn5] ******
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.260-0500 I REPL [conn5] creating replication oplog of size: 1024MB...
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.260-0500 I STORAGE [conn5] createCollection: local.oplog.rs with generated UUID: 5f1b9ff7-2fef-4590-8e90-0f3704b0f5df and options: { capped: true, size: 1073741824.0, autoIndexId: false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.267-0500 I STORAGE [conn5] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.267-0500 I STORAGE [conn5] WiredTiger record store oplog processing took 0ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.287-0500 I REPL [conn5] ******
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.287-0500 I STORAGE [conn5] createCollection: local.system.replset with generated UUID: 318b7af2-23ac-427e-bba7-a3e3f5b1e60d and options: {}
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.298-0500 I INDEX [conn5] index build: done building index _id_ on ns local.system.replset
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.299-0500 I STORAGE [conn5] createCollection: admin.system.version with provided UUID: 70439088-b608-4bfe-8d4e-f62378562d13 and options: { uuid: UUID("70439088-b608-4bfe-8d4e-f62378562d13") }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.309-0500 I INDEX [conn5] index build: done building index _id_ on ns admin.system.version
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.310-0500 I COMMAND [conn5] setting featureCompatibilityVersion to 4.2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.310-0500 I REPL [conn5] New replica set config in use: { _id: "shard-rs0", version: 1, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "localhost:20001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 86400000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5ddd7d683bbfe7fa5630d3b8') } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.310-0500 I REPL [conn5] This node is localhost:20001 in the config
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.310-0500 I REPL [conn5] transition to STARTUP2 from STARTUP
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.310-0500 I REPL [conn5] Starting replication storage threads
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.314-0500 I REPL [conn5] transition to RECOVERING from STARTUP2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.314-0500 I REPL [conn5] Starting replication fetcher thread
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.314-0500 I REPL [conn5] Starting replication applier thread
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.314-0500 I REPL [conn5] Starting replication reporter thread
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.314-0500 I REPL [OplogApplier-0] Starting oplog application
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.314-0500 I REPL [OplogApplier-0] transition to SECONDARY from RECOVERING
[ShardedClusterFixture:job0:shard0] Waiting for primary on port 20001 to be elected.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.354-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38088 #6 (3 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.354-0500 I ELECTION [OplogApplier-0] conducting a dry run election to see if we could be elected. current term: 0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.354-0500 I ELECTION [ReplCoord-0] dry election run succeeded, running for election in term 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.354-0500 I NETWORK [conn6] received client metadata from 127.0.0.1:38088 conn6: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.355-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38090 #7 (4 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.355-0500 I NETWORK [conn7] received client metadata from 127.0.0.1:38090 conn7: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.356-0500 I ELECTION [ReplCoord-0] election succeeded, assuming primary role in term 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.356-0500 I REPL [ReplCoord-0] transition to PRIMARY from SECONDARY
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.356-0500 I REPL [ReplCoord-0] Resetting sync source to empty, which was :27017
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.356-0500 I REPL [ReplCoord-0] Entering primary catch-up mode.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.356-0500 I REPL [ReplCoord-0] Exited primary catch-up mode.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:48.356-0500 I REPL [ReplCoord-0] Stopping replication producer
[ShardedClusterFixture:job0:shard0] Waiting for primary on port 20001 to be elected.
[ShardedClusterFixture:job0:shard0] Waiting for primary on port 20001 to be elected.
[ShardedClusterFixture:job0:shard0] Waiting for primary on port 20001 to be elected.
[ShardedClusterFixture:job0:shard0] Waiting for primary on port 20001 to be elected.
[ShardedClusterFixture:job0:shard0] Waiting for primary on port 20001 to be elected.
[ShardedClusterFixture:job0:shard0] Waiting for primary on port 20001 to be elected.
[ShardedClusterFixture:job0:shard0] Waiting for primary on port 20001 to be elected.
[ShardedClusterFixture:job0:shard0] Waiting for primary on port 20001 to be elected.
[ShardedClusterFixture:job0:shard0] Waiting for primary on port 20001 to be elected.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.355-0500 I REPL [ReplBatcher] Oplog buffer has been drained in term 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.358-0500 I REPL [RstlKillOpThread] Starting to kill user operations
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.359-0500 I REPL [RstlKillOpThread] Stopped killing user operations
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.359-0500 I REPL [RstlKillOpThread] State transition ops metrics: { lastStateTransition: "stepUp", userOpsKilled: 0, userOpsRunning: 0 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.359-0500 I SHARDING [OplogApplier-0] Marking collection config.transactions as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.359-0500 I STORAGE [OplogApplier-0] createCollection: config.transactions with generated UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d and options: {}
[ShardedClusterFixture:job0:shard0] Waiting for primary on port 20001 to be elected.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.370-0500 I INDEX [OplogApplier-0] index build: done building index _id_ on ns config.transactions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.371-0500 I STORAGE [OplogApplier-0] IndexBuildsCoordinator::onStepUp - this node is stepping up to primary
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.371-0500 I REPL [OplogApplier-0] transition to primary complete; database writes are now permitted
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.456-0500 I STORAGE [WTJournalFlusher] Triggering the first stable checkpoint. Initial Data: Timestamp(1574796648, 1) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1574796649, 2)
[ShardedClusterFixture:job0:shard0] Waiting for primary on port 20001 to be elected.
[ShardedClusterFixture:job0:shard0] Primary on port 20001 successfully elected.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.466-0500 I NETWORK [conn7] end connection 127.0.0.1:38090 (3 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.466-0500 I NETWORK [conn6] end connection 127.0.0.1:38088 (2 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.467-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51126 #1 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.467-0500 I NETWORK [conn1] received client metadata from 127.0.0.1:51126 conn1: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.468-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51128 #2 (2 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.468-0500 I NETWORK [conn2] received client metadata from 127.0.0.1:51128 conn2: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary0] Successfully contacted the mongod on port 20002.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.468-0500 I NETWORK [conn2] end connection 127.0.0.1:51128 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.468-0500 I NETWORK [conn1] end connection 127.0.0.1:51126 (0 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.469-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52018 #1 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.470-0500 I NETWORK [conn1] received client metadata from 127.0.0.1:52018 conn1: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.470-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52020 #2 (2 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.471-0500 I NETWORK [conn2] received client metadata from 127.0.0.1:52020 conn2: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary1] Successfully contacted the mongod on port 20003.
[ShardedClusterFixture:job0:shard0] Issuing replSetReconfig command: {'_id': 'shard-rs0', 'protocolVersion': 1, 'settings': {'electionTimeoutMillis': 86400000}, 'members': [{'_id': 0, 'host': 'localhost:20001'}, {'_id': 1, 'host': 'localhost:20002', 'priority': 0}, {'_id': 2, 'host': 'localhost:20003', 'priority': 0}], 'version': 2}
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.471-0500 I NETWORK [conn2] end connection 127.0.0.1:52020 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.471-0500 I NETWORK [conn1] end connection 127.0.0.1:52018 (0 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.472-0500 I REPL [conn5] replSetReconfig admin command received from client; new config: { _id: "shard-rs0", protocolVersion: 1, settings: { electionTimeoutMillis: 86400000 }, members: [ { _id: 0, host: "localhost:20001" }, { _id: 1, host: "localhost:20002", priority: 0 }, { _id: 2, host: "localhost:20003", priority: 0 } ], version: 2 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.472-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51134 #3 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.473-0500 I NETWORK [conn3] end connection 127.0.0.1:51134 (0 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.473-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52024 #3 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.473-0500 I NETWORK [conn3] end connection 127.0.0.1:52024 (0 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.473-0500 I REPL [conn5] replSetReconfig config object with 3 members parses ok
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.473-0500 I REPL [conn5] Scheduling remote command request for reconfig quorum check: RemoteCommand 1 -- target:localhost:20002 db:admin cmd:{ replSetHeartbeat: "shard-rs0", configVersion: 2, hbv: 1, from: "localhost:20001", fromId: 0, term: 1 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.473-0500 I REPL [conn5] Scheduling remote command request for reconfig quorum check: RemoteCommand 2 -- target:localhost:20003 db:admin cmd:{ replSetHeartbeat: "shard-rs0", configVersion: 2, hbv: 1, from: "localhost:20001", fromId: 0, term: 1 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.473-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20002
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.473-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.473-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51138 #4 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.473-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52028 #4 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.474-0500 I NETWORK [conn4] received client metadata from 127.0.0.1:51138 conn4: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.474-0500 I NETWORK [conn4] received client metadata from 127.0.0.1:52028 conn4: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.474-0500 I REPL [conn5] New replica set config in use: { _id: "shard-rs0", version: 2, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "localhost:20001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:20002", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:20003", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 86400000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5ddd7d683bbfe7fa5630d3b8') } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.474-0500 I REPL [conn5] This node is localhost:20001 in the config
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.474-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38108 #12 (3 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.475-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52032 #5 (2 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.475-0500 I REPL [ReplCoord-2] Member localhost:20002 is now in state STARTUP
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.475-0500 I NETWORK [conn5] received client metadata from 127.0.0.1:52032 conn5: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.475-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38112 #14 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.476-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51148 #6 (2 connections now open)
[ShardedClusterFixture:job0:shard0] Waiting for secondary on port 20002 to become available.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.475-0500 I NETWORK [conn14] received client metadata from 127.0.0.1:38112 conn14: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.476-0500 I NETWORK [conn6] received client metadata from 127.0.0.1:51148 conn6: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.475-0500 I NETWORK [conn12] received client metadata from 127.0.0.1:38108 conn12: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.475-0500 I REPL [ReplCoord-1] Member localhost:20003 is now in state STARTUP
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.477-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51154 #9 (3 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.476-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38116 #15 (5 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.476-0500 I NETWORK [conn15] end connection 127.0.0.1:38116 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.477-0500 I NETWORK [conn9] received client metadata from 127.0.0.1:51154 conn9: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.479-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52040 #7 (3 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.479-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38122 #16 (5 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.479-0500 I NETWORK [conn7] end connection 127.0.0.1:52040 (2 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.479-0500 I NETWORK [conn16] end connection 127.0.0.1:38122 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.479-0500 I STORAGE [ReplCoord-0] createCollection: local.system.replset with generated UUID: 3b8c02e8-ec29-4e79-912d-3e315d1d851c and options: {}
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.480-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51158 #10 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.480-0500 I NETWORK [conn10] end connection 127.0.0.1:51158 (3 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.480-0500 I STORAGE [ReplCoord-0] createCollection: local.system.replset with generated UUID: 920cbf66-0930-4ef5-82e9-10d7319f0fda and options: {}
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.491-0500 I INDEX [ReplCoord-0] index build: done building index _id_ on ns local.system.replset
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.491-0500 I REPL [ReplCoord-0] New replica set config in use: { _id: "shard-rs0", version: 2, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "localhost:20001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:20002", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:20003", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 86400000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5ddd7d683bbfe7fa5630d3b8') } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.491-0500 I REPL [ReplCoord-0] This node is localhost:20003 in the config
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.491-0500 I REPL [ReplCoord-0] transition to STARTUP2 from STARTUP
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.491-0500 I REPL [ReplCoord-0] Starting replication storage threads
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.491-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20002
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.492-0500 I REPL [ReplCoord-2] Member localhost:20001 is now in state PRIMARY
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.505-0500 I STORAGE [ReplCoord-0] createCollection: local.temp_oplog_buffer with generated UUID: 81f031ae-d3ce-4c71-ad49-031a22a2aa05 and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.514-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51160 #11 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.514-0500 I NETWORK [conn11] received client metadata from 127.0.0.1:51160 conn11: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.514-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.515-0500 I REPL [ReplCoord-1] Member localhost:20002 is now in state STARTUP
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.518-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52050 #11 (3 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.518-0500 I NETWORK [conn11] received client metadata from 127.0.0.1:52050 conn11: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.529-0500 I INDEX [ReplCoord-0] index build: done building index _id_ on ns local.system.replset
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.530-0500 I REPL [ReplCoord-0] New replica set config in use: { _id: "shard-rs0", version: 2, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "localhost:20001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:20002", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:20003", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 86400000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5ddd7d683bbfe7fa5630d3b8') } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.530-0500 I REPL [ReplCoord-0] This node is localhost:20002 in the config
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.530-0500 I REPL [ReplCoord-0] transition to STARTUP2 from STARTUP
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.531-0500 I INDEX [ReplCoord-0] index build: done building index _id_ on ns local.temp_oplog_buffer
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.531-0500 I INITSYNC [ReplCoordExtern-0] Starting initial sync (attempt 1 of 10)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.531-0500 I STORAGE [ReplCoordExtern-0] Finishing collection drop for local.temp_oplog_buffer (81f031ae-d3ce-4c71-ad49-031a22a2aa05).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.532-0500 I REPL [ReplCoord-0] Starting replication storage threads
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.532-0500 I REPL [ReplCoord-4] Member localhost:20001 is now in state PRIMARY
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.532-0500 I REPL [ReplCoord-2] Member localhost:20003 is now in state STARTUP2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.533-0500 I STORAGE [ReplCoordExtern-0] createCollection: local.temp_oplog_buffer with generated UUID: aea601b6-30f7-47df-b645-32f737d04315 and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.542-0500 I STORAGE [ReplCoord-0] createCollection: local.temp_oplog_buffer with generated UUID: 0e338c7c-db31-4421-8e24-8eed1a2f59cc and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.549-0500 I INDEX [ReplCoordExtern-0] index build: done building index _id_ on ns local.temp_oplog_buffer
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.549-0500 I REPL [ReplCoordExtern-0] waiting for 1 pings from other members before syncing
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.556-0500 I INDEX [ReplCoord-0] index build: done building index _id_ on ns local.temp_oplog_buffer
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.556-0500 I INITSYNC [ReplCoordExtern-0] Starting initial sync (attempt 1 of 10)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.556-0500 I STORAGE [ReplCoordExtern-0] Finishing collection drop for local.temp_oplog_buffer (0e338c7c-db31-4421-8e24-8eed1a2f59cc).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.558-0500 I STORAGE [ReplCoordExtern-0] createCollection: local.temp_oplog_buffer with generated UUID: 6028f73a-311c-4bbd-88f1-0de4e253ea6b and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.570-0500 I INDEX [ReplCoordExtern-0] index build: done building index _id_ on ns local.temp_oplog_buffer
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.570-0500 I REPL [ReplCoordExtern-0] sync source candidate: localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.571-0500 I INITSYNC [ReplCoordExtern-0] Initial syncer oplog truncation finished in: 0ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.571-0500 I REPL [ReplCoordExtern-0] ******
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.571-0500 I REPL [ReplCoordExtern-0] creating replication oplog of size: 1024MB...
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.571-0500 I STORAGE [ReplCoordExtern-0] createCollection: local.oplog.rs with generated UUID: 88962763-38f7-4965-bfd6-b2a62304ae0e and options: { capped: true, size: 1073741824.0, autoIndexId: false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.575-0500 I STORAGE [ReplCoordExtern-0] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.575-0500 I STORAGE [ReplCoordExtern-0] WiredTiger record store oplog processing took 0ms
[ShardedClusterFixture:job0:shard0] Waiting for secondary on port 20002 to become available.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.600-0500 I REPL [ReplCoordExtern-0] ******
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.600-0500 I REPL [ReplCoordExtern-0] dropReplicatedDatabases - dropping 1 databases
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.600-0500 I REPL [ReplCoordExtern-0] dropReplicatedDatabases - dropped 1 databases
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.601-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38130 #17 (5 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.601-0500 I NETWORK [conn17] received client metadata from 127.0.0.1:38130 conn17: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.603-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38132 #18 (6 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.603-0500 I SHARDING [ReplCoordExtern-2] Marking collection local.temp_oplog_buffer as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.603-0500 I NETWORK [conn18] received client metadata from 127.0.0.1:38132 conn18: { driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.604-0500 I STORAGE [ReplCoordExtern-1] createCollection: admin.system.version with provided UUID: 70439088-b608-4bfe-8d4e-f62378562d13 and options: { uuid: UUID("70439088-b608-4bfe-8d4e-f62378562d13") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.626-0500 I INDEX [ReplCoordExtern-1] index build: starting on admin.system.version properties: { v: 2, key: { _id: 1 }, name: "_id_" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.626-0500 I INDEX [ReplCoordExtern-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.627-0500 I COMMAND [ReplWriterWorker-0] setting featureCompatibilityVersion to 4.2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.627-0500 I INDEX [ReplCoordExtern-1] index build: inserted 1 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.629-0500 I INDEX [ReplCoordExtern-1] index build: done building index _id_ on ns admin.system.version
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.635-0500 I STORAGE [ReplCoordExtern-1] createCollection: config.transactions with provided UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d and options: { uuid: UUID("594dd33c-8197-4d92-ab4c-87745ec5f77d") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.657-0500 I INDEX [ReplCoordExtern-1] index build: starting on config.transactions properties: { v: 2, key: { _id: 1 }, name: "_id_" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.657-0500 I INDEX [ReplCoordExtern-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.658-0500 I INDEX [ReplCoordExtern-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.660-0500 I INDEX [ReplCoordExtern-1] index build: done building index _id_ on ns config.transactions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.662-0500 I INITSYNC [ReplCoordExtern-1] Finished cloning data: OK. Beginning oplog replay.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.662-0500 I NETWORK [conn18] end connection 127.0.0.1:38132 (5 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.662-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38134 #19 (6 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:49.663-0500 I NETWORK [conn19] received client metadata from 127.0.0.1:38134 conn19: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.663-0500 I INITSYNC [ReplCoordExtern-2] No need to apply operations. (currently at { : Timestamp(1574796649, 3) })
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.663-0500 I INITSYNC [ReplCoordExtern-1] Finished fetching oplog during initial sync: CallbackCanceled: error in fetcher batch callback: oplog fetcher is shutting down. Last fetched optime: { ts: Timestamp(0, 0), t: -1 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.663-0500 I INITSYNC [ReplCoordExtern-1] Initial sync attempt finishing up.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.663-0500 I INITSYNC [ReplCoordExtern-1] Initial Sync Attempt Statistics: { failedInitialSyncAttempts: 0, maxFailedInitialSyncAttempts: 10, initialSyncStart: new Date(1574796649556), initialSyncAttempts: [], appliedOps: 0, initialSyncOplogStart: Timestamp(1574796649, 3), initialSyncOplogEnd: Timestamp(1574796649, 3), databases: { databasesCloned: 2, databaseCount: 2, admin: { collections: 1, clonedCollections: 1, start: new Date(1574796649604), admin.system.version: { documentsToCopy: 1, documentsCopied: 1, indexes: 1, fetchedBatches: 1, receivedBatches: 1 } }, config: { collections: 1, clonedCollections: 1, start: new Date(1574796649635), config.transactions: { documentsToCopy: 0, documentsCopied: 0, indexes: 1, fetchedBatches: 0, receivedBatches: 0 } } } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.663-0500 I STORAGE [ReplCoordExtern-1] Finishing collection drop for local.temp_oplog_buffer (6028f73a-311c-4bbd-88f1-0de4e253ea6b).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.665-0500 I SHARDING [ReplCoordExtern-1] Marking collection config.transactions as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.665-0500 I SHARDING [ReplCoordExtern-1] Marking collection local.replset.oplogTruncateAfterPoint as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.666-0500 I INITSYNC [ReplCoordExtern-1] initial sync done; took 0s.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.666-0500 I REPL [ReplCoordExtern-1] transition to RECOVERING from STARTUP2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.666-0500 I REPL [ReplCoordExtern-1] Starting replication fetcher thread
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.666-0500 I REPL [ReplCoordExtern-1] Starting replication applier thread
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.666-0500 I REPL [ReplCoordExtern-1] Starting replication reporter thread
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.666-0500 I REPL [OplogApplier-0] Starting oplog application
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.667-0500 I REPL [BackgroundSync] could not find member to sync from
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.670-0500 I REPL [OplogApplier-0] transition to SECONDARY from RECOVERING
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.670-0500 I REPL [OplogApplier-0] Resetting sync source to empty, which was :27017
[ShardedClusterFixture:job0:shard0] Waiting for secondary on port 20002 to become available.
[ShardedClusterFixture:job0:shard0] Secondary on port 20002 is now available.
[ShardedClusterFixture:job0:shard0] Waiting for secondary on port 20003 to become available.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.684-0500 I NETWORK [conn9] end connection 127.0.0.1:51154 (3 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.684-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52058 #12 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:49.684-0500 I NETWORK [conn6] end connection 127.0.0.1:51148 (2 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.684-0500 I NETWORK [conn12] received client metadata from 127.0.0.1:52058 conn12: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.685-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52060 #13 (5 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:49.685-0500 I NETWORK [conn13] received client metadata from 127.0.0.1:52060 conn13: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0] Waiting for secondary on port 20003 to become available.
[ShardedClusterFixture:job0:shard0] Waiting for secondary on port 20003 to become available.
[ShardedClusterFixture:job0:shard0] Waiting for secondary on port 20003 to become available.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.015-0500 I REPL [ReplCoord-0] Member localhost:20002 is now in state SECONDARY
[ShardedClusterFixture:job0:shard0] Waiting for secondary on port 20003 to become available.
[ShardedClusterFixture:job0:shard0] Waiting for secondary on port 20003 to become available.
[ShardedClusterFixture:job0:shard0] Waiting for secondary on port 20003 to become available.
[ShardedClusterFixture:job0:shard0] Waiting for secondary on port 20003 to become available.
[ShardedClusterFixture:job0:shard0] Waiting for secondary on port 20003 to become available.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.549-0500 I REPL [ReplCoordExtern-1] sync source candidate: localhost:20002
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.549-0500 I INITSYNC [ReplCoordExtern-1] Initial syncer oplog truncation finished in: 0ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.549-0500 I REPL [ReplCoordExtern-1] ******
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.549-0500 I REPL [ReplCoordExtern-1] creating replication oplog of size: 1024MB...
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.549-0500 I STORAGE [ReplCoordExtern-1] createCollection: local.oplog.rs with generated UUID: 6d43bede-f05f-41b1-b7ac-5a32b66b8140 and options: { capped: true, size: 1073741824.0, autoIndexId: false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.555-0500 I STORAGE [ReplCoordExtern-1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.555-0500 I STORAGE [ReplCoordExtern-1] WiredTiger record store oplog processing took 0ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.579-0500 I REPL [ReplCoordExtern-1] ******
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.579-0500 I REPL [ReplCoordExtern-1] dropReplicatedDatabases - dropping 1 databases
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.579-0500 I REPL [ReplCoordExtern-1] dropReplicatedDatabases - dropped 1 databases
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.579-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20002
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:50.579-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51174 #16 (3 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:50.580-0500 I NETWORK [conn16] received client metadata from 127.0.0.1:51174 conn16: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:50.582-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51176 #17 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:50.582-0500 I NETWORK [conn17] received client metadata from 127.0.0.1:51176 conn17: { driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.582-0500 I SHARDING [ReplCoordExtern-2] Marking collection local.temp_oplog_buffer as collection version:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.583-0500 I STORAGE [ReplCoordExtern-0] createCollection: admin.system.version with provided UUID: 70439088-b608-4bfe-8d4e-f62378562d13 and options: { uuid: UUID("70439088-b608-4bfe-8d4e-f62378562d13") }
[ShardedClusterFixture:job0:shard0] Waiting for secondary on port 20003 to become available.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.604-0500 I INDEX [ReplCoordExtern-0] index build: starting on admin.system.version properties: { v: 2, key: { _id: 1 }, name: "_id_" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.604-0500 I INDEX [ReplCoordExtern-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.604-0500 I COMMAND [ReplWriterWorker-12] setting featureCompatibilityVersion to 4.2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.605-0500 I INDEX [ReplCoordExtern-0] index build: inserted 1 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.607-0500 I INDEX [ReplCoordExtern-0] index build: done building index _id_ on ns admin.system.version
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.611-0500 I STORAGE [ReplCoordExtern-0] createCollection: config.transactions with provided UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d and options: { uuid: UUID("594dd33c-8197-4d92-ab4c-87745ec5f77d") }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.631-0500 I INDEX [ReplCoordExtern-0] index build: starting on config.transactions properties: { v: 2, key: { _id: 1 }, name: "_id_" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.631-0500 I INDEX [ReplCoordExtern-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.632-0500 I INDEX [ReplCoordExtern-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.634-0500 I INDEX [ReplCoordExtern-0] index build: done building index _id_ on ns config.transactions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.636-0500 I INITSYNC [ReplCoordExtern-0] Finished cloning data: OK. Beginning oplog replay.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:50.636-0500 I NETWORK [conn17] end connection 127.0.0.1:51176 (3 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:50.637-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51178 #18 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:50.637-0500 I NETWORK [conn18] received client metadata from 127.0.0.1:51178 conn18: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.637-0500 I INITSYNC [ReplCoordExtern-2] No need to apply operations. (currently at { : Timestamp(1574796649, 3) })
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.638-0500 I INITSYNC [ReplCoordExtern-1] Finished fetching oplog during initial sync: CallbackCanceled: error in fetcher batch callback: oplog fetcher is shutting down. Last fetched optime: { ts: Timestamp(0, 0), t: -1 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.638-0500 I INITSYNC [ReplCoordExtern-1] Initial sync attempt finishing up.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.638-0500 I INITSYNC [ReplCoordExtern-1] Initial Sync Attempt Statistics: { failedInitialSyncAttempts: 0, maxFailedInitialSyncAttempts: 10, initialSyncStart: new Date(1574796649531), initialSyncAttempts: [], appliedOps: 0, initialSyncOplogStart: Timestamp(1574796649, 3), initialSyncOplogEnd: Timestamp(1574796649, 3), databases: { databasesCloned: 2, databaseCount: 2, admin: { collections: 1, clonedCollections: 1, start: new Date(1574796650583), admin.system.version: { documentsToCopy: 1, documentsCopied: 1, indexes: 1, fetchedBatches: 1, receivedBatches: 1 } }, config: { collections: 1, clonedCollections: 1, start: new Date(1574796650610), config.transactions: { documentsToCopy: 0, documentsCopied: 0, indexes: 1, fetchedBatches: 0, receivedBatches: 0 } } } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.638-0500 I STORAGE [ReplCoordExtern-1] Finishing collection drop for local.temp_oplog_buffer (aea601b6-30f7-47df-b645-32f737d04315).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.640-0500 I SHARDING [ReplCoordExtern-1] Marking collection config.transactions as collection version:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.640-0500 I SHARDING [ReplCoordExtern-1] Marking collection local.replset.oplogTruncateAfterPoint as collection version:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.641-0500 I INITSYNC [ReplCoordExtern-1] initial sync done; took 1s.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.641-0500 I REPL [ReplCoordExtern-1] transition to RECOVERING from STARTUP2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.641-0500 I REPL [ReplCoordExtern-1] Starting replication fetcher thread
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.641-0500 I REPL [ReplCoordExtern-1] Starting replication applier thread
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.641-0500 I REPL [ReplCoordExtern-1] Starting replication reporter thread
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.641-0500 I REPL [OplogApplier-0] Starting oplog application
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.645-0500 I REPL [BackgroundSync] could not find member to sync from
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.645-0500 I REPL [OplogApplier-0] transition to SECONDARY from RECOVERING
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.645-0500 I REPL [OplogApplier-0] Resetting sync source to empty, which was :27017
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:50.667-0500 I REPL [ReplCoord-4] Member localhost:20003 is now in state SECONDARY
[ShardedClusterFixture:job0:shard0] Waiting for secondary on port 20003 to become available.
[ShardedClusterFixture:job0:shard0] Secondary on port 20003 is now available.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.693-0500 I NETWORK [conn13] end connection 127.0.0.1:52060 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:50.693-0500 I NETWORK [conn12] end connection 127.0.0.1:52058 (3 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:50.693-0500 I NETWORK [conn5] end connection 127.0.0.1:38086 (5 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:50.693-0500 I NETWORK [conn4] end connection 127.0.0.1:38084 (4 connections now open)
[ShardedClusterFixture:job0:shard1:primary] Starting mongod on port 20004...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongod --setParameter enableTestCommands=1 --setParameter migrationLockAcquisitionMaxWaitMS=30000 --setParameter logComponentVerbosity={'replication': {'rollback': 2}, 'transaction': 4} --setParameter orphanCleanupDelaySecs=1 --setParameter disableLogicalSessionCacheRefresh=true --setParameter transactionLifetimeLimitSeconds=86400 --setParameter maxIndexBuildDrainBatchSize=10 --setParameter writePeriodicNoops=false --setParameter waitForStepDownOnNonCommandShutdown=false --oplogSize=1024 --shardsvr --replSet=shard-rs1 --dbpath=/home/nz_linux/data/job0/resmoke/shard1/node0 --port=20004 --enableMajorityReadConcern=True
[ShardedClusterFixture:job0:shard1:primary] mongod started on port 20004 with pid 14340.
[ShardedClusterFixture:job0:shard1:secondary0] Starting mongod on port 20005...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongod --setParameter enableTestCommands=1 --setParameter migrationLockAcquisitionMaxWaitMS=30000 --setParameter logComponentVerbosity={'replication': {'rollback': 2}, 'transaction': 4} --setParameter orphanCleanupDelaySecs=1 --setParameter disableLogicalSessionCacheRefresh=true --setParameter transactionLifetimeLimitSeconds=86400 --setParameter maxIndexBuildDrainBatchSize=10 --setParameter writePeriodicNoops=false --setParameter waitForStepDownOnNonCommandShutdown=false --oplogSize=1024 --shardsvr --replSet=shard-rs1 --dbpath=/home/nz_linux/data/job0/resmoke/shard1/node1 --port=20005 --enableMajorityReadConcern=True
[ShardedClusterFixture:job0:shard1:secondary0] mongod started on port 20005 with pid 14343.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:50.733-0500 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
[ShardedClusterFixture:job0:shard1:secondary1] Starting mongod on port 20006...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongod --setParameter enableTestCommands=1 --setParameter migrationLockAcquisitionMaxWaitMS=30000 --setParameter logComponentVerbosity={'replication': {'rollback': 2}, 'transaction': 4} --setParameter orphanCleanupDelaySecs=1 --setParameter disableLogicalSessionCacheRefresh=true --setParameter transactionLifetimeLimitSeconds=86400 --setParameter maxIndexBuildDrainBatchSize=10 --setParameter writePeriodicNoops=false --setParameter waitForStepDownOnNonCommandShutdown=false --oplogSize=1024 --shardsvr --replSet=shard-rs1 --dbpath=/home/nz_linux/data/job0/resmoke/shard1/node2 --port=20006 --enableMajorityReadConcern=True
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:50.739-0500 I CONTROL [initandlisten] MongoDB starting : pid=14340 port=20004 dbpath=/home/nz_linux/data/job0/resmoke/shard1/node0 64-bit host=nz_desktop
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:50.739-0500 I CONTROL [initandlisten] db version v0.0.0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:50.739-0500 I CONTROL [initandlisten] git version: unknown
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:50.739-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:50.739-0500 I CONTROL [initandlisten] allocator: system
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:50.739-0500 I CONTROL [initandlisten] modules: enterprise ninja
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:50.739-0500 I CONTROL [initandlisten] build environment:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:50.739-0500 I CONTROL [initandlisten] distarch: x86_64
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:50.740-0500 I CONTROL [initandlisten] target_arch: x86_64
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:50.740-0500 I CONTROL [initandlisten] options: { net: { port: 20004 }, replication: { enableMajorityReadConcern: true, oplogSizeMB: 1024, replSet: "shard-rs1" }, setParameter: { disableLogicalSessionCacheRefresh: "true", enableTestCommands: "1", logComponentVerbosity: "{'replication': {'rollback': 2}, 'transaction': 4}", maxIndexBuildDrainBatchSize: "10", migrationLockAcquisitionMaxWaitMS: "30000", orphanCleanupDelaySecs: "1", transactionLifetimeLimitSeconds: "86400", waitForStepDownOnNonCommandShutdown: "false", writePeriodicNoops: "false" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/home/nz_linux/data/job0/resmoke/shard1/node0" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:50.741-0500 I STORAGE [initandlisten]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:50.741-0500 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:50.741-0500 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:50.741-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=31635M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],
[ShardedClusterFixture:job0:shard1:secondary1] mongod started on port 20006 with pid 14346.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:50.759-0500 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:50.762-0500 I CONTROL [initandlisten] MongoDB starting : pid=14343 port=20005 dbpath=/home/nz_linux/data/job0/resmoke/shard1/node1 64-bit host=nz_desktop
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:50.762-0500 I CONTROL [initandlisten] db version v0.0.0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:50.762-0500 I CONTROL [initandlisten] git version: unknown
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:50.762-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:50.762-0500 I CONTROL [initandlisten] allocator: system
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:50.762-0500 I CONTROL [initandlisten] modules: enterprise ninja
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:50.762-0500 I CONTROL [initandlisten] build environment:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:50.762-0500 I CONTROL [initandlisten] distarch: x86_64
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:50.762-0500 I CONTROL [initandlisten] target_arch: x86_64
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:50.762-0500 I CONTROL [initandlisten] options: { net: { port: 20005 }, replication: { enableMajorityReadConcern: true, oplogSizeMB: 1024, replSet: "shard-rs1" }, setParameter: { disableLogicalSessionCacheRefresh: "true", enableTestCommands: "1", logComponentVerbosity: "{'replication': {'rollback': 2}, 'transaction': 4}", maxIndexBuildDrainBatchSize: "10", migrationLockAcquisitionMaxWaitMS: "30000", orphanCleanupDelaySecs: "1", transactionLifetimeLimitSeconds: "86400", waitForStepDownOnNonCommandShutdown: "false", writePeriodicNoops: "false" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/home/nz_linux/data/job0/resmoke/shard1/node1" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:50.762-0500 I STORAGE [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:50.762-0500 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:50.762-0500 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:50.762-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=31635M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:50.777-0500 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:50.779-0500 I CONTROL [initandlisten] MongoDB starting : pid=14346 port=20006 dbpath=/home/nz_linux/data/job0/resmoke/shard1/node2 64-bit host=nz_desktop
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:50.779-0500 I CONTROL [initandlisten] db version v0.0.0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:50.779-0500 I CONTROL [initandlisten] git version: unknown
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:50.779-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:50.779-0500 I CONTROL [initandlisten] allocator: system
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:50.779-0500 I CONTROL [initandlisten] modules: enterprise ninja
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:50.779-0500 I CONTROL [initandlisten] build environment:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:50.779-0500 I CONTROL [initandlisten] distarch: x86_64
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:50.779-0500 I CONTROL [initandlisten] target_arch: x86_64
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:50.779-0500 I CONTROL [initandlisten] options: { net: { port: 20006 }, replication: { enableMajorityReadConcern: true, oplogSizeMB: 1024, replSet: "shard-rs1" }, setParameter: { disableLogicalSessionCacheRefresh: "true", enableTestCommands: "1", logComponentVerbosity: "{'replication': {'rollback': 2}, 'transaction': 4}", maxIndexBuildDrainBatchSize: "10", migrationLockAcquisitionMaxWaitMS: "30000", orphanCleanupDelaySecs: "1", transactionLifetimeLimitSeconds: "86400", waitForStepDownOnNonCommandShutdown: "false", writePeriodicNoops: "false" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/home/nz_linux/data/job0/resmoke/shard1/node2" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:50.780-0500 I STORAGE [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:50.780-0500 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:50.780-0500 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:50.780-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=31635M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],
[ShardedClusterFixture:job0:shard1:primary] Waiting to connect to mongod on port 20004.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:51.475-0500 I REPL [ReplCoord-1] Member localhost:20002 is now in state SECONDARY
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:51.475-0500 I REPL [ReplCoord-0] Member localhost:20003 is now in state SECONDARY
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.580-0500 I STORAGE [initandlisten] WiredTiger message [1574796651:580329][14340:0x7fb47447fa00], txn-recover: Set global recovery timestamp: (0,0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.591-0500 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.606-0500 I STORAGE [initandlisten] WiredTiger message [1574796651:606431][14343:0x7f97ba8f8a00], txn-recover: Set global recovery timestamp: (0,0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.609-0500 I STORAGE [initandlisten] Timestamp monitor starting
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.618-0500 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.619-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.619-0500 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.619-0500 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.619-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.619-0500 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.619-0500 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.619-0500 I CONTROL [initandlisten] ** Start the server with --bind_ip to specify which IP
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.619-0500 I CONTROL [initandlisten] ** addresses it should serve responses from, or with --bind_ip_all to
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.619-0500 I CONTROL [initandlisten] ** bind to all interfaces. If this behavior is desired, start the
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.619-0500 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.619-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.621-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.621-0500 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.621-0500 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.621-0500 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.621-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.621-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.621-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.621-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.621-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.621-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.622-0500 I STORAGE [initandlisten] WiredTiger message [1574796651:622171][14346:0x7fb4b25c9a00], txn-recover: Set global recovery timestamp: (0,0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.621-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.621-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. Number of files is 1024, should be at least 64000
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.621-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.622-0500 I SHARDING [initandlisten] Marking collection local.system.replset as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.622-0500 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.623-0500 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.623-0500 I SHARDING [initandlisten] Marking collection admin.system.version as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.623-0500 W SHARDING [initandlisten] Started with --shardsvr, but no shardIdentity document was found on disk in admin.system.version. This most likely means this server has not yet been added to a sharded cluster.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.623-0500 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: fd9e05bb-cd6c-441c-9265-3783d4065b03 and options: { capped: true, size: 10485760 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.632-0500 I STORAGE [initandlisten] Timestamp monitor starting
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.633-0500 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.639-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.startup_log
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.639-0500 I SHARDING [initandlisten] Marking collection local.startup_log as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.639-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/home/nz_linux/data/job0/resmoke/shard1/node0/diagnostic.data'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.640-0500 I STORAGE [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: 31ce824c-ef86-4223-a4be-3069dae7b5f2 and options: {}
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.641-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.641-0500 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.641-0500 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.641-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.641-0500 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.641-0500 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.641-0500 I CONTROL [initandlisten] ** Start the server with --bind_ip to specify which IP
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.641-0500 I CONTROL [initandlisten] ** addresses it should serve responses from, or with --bind_ip_all to
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.641-0500 I CONTROL [initandlisten] ** bind to all interfaces. If this behavior is desired, start the
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.641-0500 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.641-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.643-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.643-0500 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.643-0500 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.643-0500 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.643-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.643-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.643-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.643-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.643-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.643-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.643-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.643-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. Number of files is 1024, should be at least 64000
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.643-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:51.645-0500 I STORAGE [ReplCoord-3] Triggering the first stable checkpoint. Initial Data: Timestamp(1574796649, 3) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1574796649, 3)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.646-0500 I STORAGE [initandlisten] Timestamp monitor starting
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.647-0500 I SHARDING [initandlisten] Marking collection local.system.replset as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.647-0500 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.647-0500 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.648-0500 I SHARDING [initandlisten] Marking collection admin.system.version as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.648-0500 W SHARDING [initandlisten] Started with --shardsvr, but no shardIdentity document was found on disk in admin.system.version. This most likely means this server has not yet been added to a sharded cluster.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.648-0500 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: fb2ea5d2-ac7b-4697-a368-9f5d41483423 and options: { capped: true, size: 10485760 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.655-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.655-0500 I STORAGE [initandlisten] createCollection: local.replset.minvalid with generated UUID: 5dfed1a1-c7a1-4f91-a3da-2544e54d2e9a and options: {}
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.655-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.655-0500 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.655-0500 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.655-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.655-0500 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.655-0500 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.655-0500 I CONTROL [initandlisten] ** Start the server with --bind_ip to specify which IP
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.655-0500 I CONTROL [initandlisten] ** addresses it should serve responses from, or with --bind_ip_all to
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.655-0500 I CONTROL [initandlisten] ** bind to all interfaces. If this behavior is desired, start the
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.655-0500 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.655-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.657-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.657-0500 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.657-0500 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.657-0500 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.657-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.657-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.657-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.657-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.657-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.657-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.657-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.657-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. Number of files is 1024, should be at least 64000
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.657-0500 I CONTROL [initandlisten]
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.659-0500 I SHARDING [initandlisten] Marking collection local.system.replset as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.659-0500 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.659-0500 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.659-0500 I SHARDING [initandlisten] Marking collection admin.system.version as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.659-0500 W SHARDING [initandlisten] Started with --shardsvr, but no shardIdentity document was found on disk in admin.system.version. This most likely means this server has not yet been added to a sharded cluster.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.659-0500 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: 62f9eac5-a715-4818-9af1-edc47894f622 and options: { capped: true, size: 10485760 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.664-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.startup_log
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.664-0500 I SHARDING [initandlisten] Marking collection local.startup_log as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.665-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/home/nz_linux/data/job0/resmoke/shard1/node1/diagnostic.data'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.665-0500 I STORAGE [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: ae67a1b2-b2be-4d7e-8242-18f3082bc280 and options: {}
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:51.667-0500 I STORAGE [ReplCoord-0] Triggering the first stable checkpoint. Initial Data: Timestamp(1574796649, 3) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1574796649, 3)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.671-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.minvalid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.671-0500 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.671-0500 I STORAGE [initandlisten] createCollection: local.replset.election with generated UUID: 101a66fe-c3c0-4bee-94b9-e9bb8d04aa79 and options: {}
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.676-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.startup_log
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.676-0500 I SHARDING [initandlisten] Marking collection local.startup_log as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.676-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/home/nz_linux/data/job0/resmoke/shard1/node2/diagnostic.data'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.677-0500 I STORAGE [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: 022b88bb-9282-4f39-aad1-6988341f4ac1 and options: {}
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.682-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.682-0500 I STORAGE [initandlisten] createCollection: local.replset.minvalid with generated UUID: 3f481e27-9697-4b6d-b77b-0bd9b43c5dfa and options: {}
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.688-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.election
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.688-0500 I SHARDING [initandlisten] Marking collection local.replset.election as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.689-0500 I REPL [initandlisten] Did not find local initialized voted for document at startup.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.689-0500 I REPL [initandlisten] Did not find local Rollback ID document at startup. Creating one.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.689-0500 I STORAGE [initandlisten] createCollection: local.system.rollback.id with generated UUID: 223114bc-2956-4d9b-8f0a-5c567c2cb10e and options: {}
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.693-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.693-0500 I STORAGE [initandlisten] createCollection: local.replset.minvalid with generated UUID: e1166351-a2a9-4335-b202-a653b252b811 and options: {}
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.697-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.697-0500 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.698-0500 I STORAGE [initandlisten] createCollection: local.replset.election with generated UUID: 6a83721b-d0f2-438c-a2e3-ec6a11e75236 and options: {}
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.704-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.system.rollback.id
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.704-0500 I SHARDING [initandlisten] Marking collection local.system.rollback.id as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.704-0500 I REPL [initandlisten] Initialized the rollback ID to 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.704-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.705-0500 I NETWORK [initandlisten] Listening on /tmp/mongodb-20004.sock
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.705-0500 I NETWORK [initandlisten] Listening on 127.0.0.1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.705-0500 I NETWORK [initandlisten] waiting for connections on port 20004
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.708-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.708-0500 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.708-0500 I STORAGE [initandlisten] createCollection: local.replset.election with generated UUID: 7b059263-7419-4cf5-8072-b44957d729c9 and options: {}
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.712-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.election
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.712-0500 I SHARDING [initandlisten] Marking collection local.replset.election as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.713-0500 I REPL [initandlisten] Did not find local initialized voted for document at startup.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.713-0500 I REPL [initandlisten] Did not find local Rollback ID document at startup. Creating one.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.713-0500 I STORAGE [initandlisten] createCollection: local.system.rollback.id with generated UUID: d6027364-802b-4e8d-ae7f-556bc4252840 and options: {}
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.723-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.election
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.723-0500 I SHARDING [initandlisten] Marking collection local.replset.election as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.723-0500 I REPL [initandlisten] Did not find local initialized voted for document at startup.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.723-0500 I REPL [initandlisten] Did not find local Rollback ID document at startup. Creating one.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.723-0500 I STORAGE [initandlisten] createCollection: local.system.rollback.id with generated UUID: af3b2fdb-b5ae-49b3-a026-c55e1bf822c0 and options: {}
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.726-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.727-0500 I SHARDING [initandlisten] Marking collection local.system.rollback.id as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.727-0500 I REPL [initandlisten] Initialized the rollback ID to 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.727-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.731-0500 I NETWORK [initandlisten] Listening on /tmp/mongodb-20005.sock
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.731-0500 I NETWORK [initandlisten] Listening on 127.0.0.1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:51.731-0500 I NETWORK [initandlisten] waiting for connections on port 20005
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.734-0500 I INDEX [initandlisten] index build: done building index _id_ on ns local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.734-0500 I SHARDING [initandlisten] Marking collection local.system.rollback.id as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.734-0500 I REPL [initandlisten] Initialized the rollback ID to 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.734-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.736-0500 I NETWORK [initandlisten] Listening on /tmp/mongodb-20006.sock
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.736-0500 I NETWORK [initandlisten] Listening on 127.0.0.1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:51.736-0500 I NETWORK [initandlisten] waiting for connections on port 20006
[ShardedClusterFixture:job0:shard1:primary] Waiting to connect to mongod on port 20004.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.855-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45618 #1 (1 connection now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.856-0500 I NETWORK [conn1] received client metadata from 127.0.0.1:45618 conn1: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.957-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45620 #2 (2 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.957-0500 I NETWORK [conn2] received client metadata from 127.0.0.1:45620 conn2: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.958-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45622 #3 (3 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.958-0500 I NETWORK [conn3] received client metadata from 127.0.0.1:45622 conn3: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] Successfully contacted the mongod on port 20004.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.959-0500 I NETWORK [conn3] end connection 127.0.0.1:45622 (2 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.959-0500 I NETWORK [conn2] end connection 127.0.0.1:45620 (1 connection now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.959-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45624 #4 (2 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.960-0500 I NETWORK [conn4] received client metadata from 127.0.0.1:45624 conn4: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.960-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45626 #5 (3 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.961-0500 I NETWORK [conn5] received client metadata from 127.0.0.1:45626 conn5: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.961-0500 I SHARDING [conn5] Marking collection local.oplog.rs as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.963-0500 I NETWORK [conn1] end connection 127.0.0.1:45618 (2 connections now open)
[ShardedClusterFixture:job0:shard1] Issuing replSetInitiate command: {'_id': 'shard-rs1', 'protocolVersion': 1, 'settings': {'electionTimeoutMillis': 86400000}, 'members': [{'_id': 0, 'host': 'localhost:20004'}]}
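The replSetInitiate document logged above can be sketched with a small helper. `make_initiate_config` is a hypothetical illustration, not resmoke code; only the shape of the dict comes from the log line, and the PyMongo call shown in the comment is how such a command would typically be issued.

```python
# Hypothetical helper mirroring the replSetInitiate document in the log.
# The fixture starts the set with a single member; the one-day election
# timeout keeps the chosen primary stable for the duration of the test.

def make_initiate_config(set_name, first_port, election_timeout_ms=86400000):
    """Build a single-member replSetInitiate document."""
    return {
        '_id': set_name,
        'protocolVersion': 1,
        'settings': {'electionTimeoutMillis': election_timeout_ms},
        'members': [{'_id': 0, 'host': 'localhost:%d' % first_port}],
    }

config = make_initiate_config('shard-rs1', 20004)
# Against a live mongod this would be sent via PyMongo, e.g.:
#   pymongo.MongoClient('localhost', 20004).admin.command('replSetInitiate', config)
```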
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.964-0500 I REPL [conn5] replSetInitiate admin command received from client
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.965-0500 I REPL [conn5] replSetInitiate config object with 1 members parses ok
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.965-0500 I REPL [conn5] ******
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.965-0500 I REPL [conn5] creating replication oplog of size: 1024MB...
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.965-0500 I STORAGE [conn5] createCollection: local.oplog.rs with generated UUID: f999d0d7-cb6c-4d2c-a5ff-807a7ed09766 and options: { capped: true, size: 1073741824.0, autoIndexId: false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.971-0500 I STORAGE [conn5] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.971-0500 I STORAGE [conn5] WiredTiger record store oplog processing took 0ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.992-0500 I REPL [conn5] ******
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:51.992-0500 I STORAGE [conn5] createCollection: local.system.replset with generated UUID: 3eb8c3e8-f477-448c-9a25-5db5ef40b0d6 and options: {}
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:52.000-0500 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:52.002-0500 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.003-0500 I INDEX [conn5] index build: done building index _id_ on ns local.system.replset
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.004-0500 I STORAGE [conn5] createCollection: admin.system.version with provided UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5 and options: { uuid: UUID("19b398bd-025a-4aca-9299-76bf6d82acc5") }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.015-0500 I INDEX [conn5] index build: done building index _id_ on ns admin.system.version
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.015-0500 I COMMAND [conn5] setting featureCompatibilityVersion to 4.2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.015-0500 I REPL [conn5] New replica set config in use: { _id: "shard-rs1", version: 1, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "localhost:20004", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 86400000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5ddd7d6bcf8184c2e1492eba') } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.015-0500 I REPL [conn5] This node is localhost:20004 in the config
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.015-0500 I REPL [conn5] transition to STARTUP2 from STARTUP
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.015-0500 I REPL [conn5] Starting replication storage threads
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.019-0500 I REPL [conn5] transition to RECOVERING from STARTUP2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.019-0500 I REPL [conn5] Starting replication fetcher thread
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.019-0500 I REPL [conn5] Starting replication applier thread
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.019-0500 I REPL [conn5] Starting replication reporter thread
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.019-0500 I REPL [OplogApplier-0] Starting oplog application
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.020-0500 I REPL [OplogApplier-0] transition to SECONDARY from RECOVERING
[ShardedClusterFixture:job0:shard1] Waiting for primary on port 20004 to be elected.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.046-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45628 #6 (3 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.046-0500 I ELECTION [OplogApplier-0] conducting a dry run election to see if we could be elected. current term: 0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.046-0500 I ELECTION [ReplCoord-0] dry election run succeeded, running for election in term 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.046-0500 I NETWORK [conn6] received client metadata from 127.0.0.1:45628 conn6: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.051-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45630 #7 (4 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.051-0500 I NETWORK [conn7] received client metadata from 127.0.0.1:45630 conn7: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.051-0500 I ELECTION [ReplCoord-1] election succeeded, assuming primary role in term 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.051-0500 I REPL [ReplCoord-1] transition to PRIMARY from SECONDARY
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.051-0500 I REPL [ReplCoord-1] Resetting sync source to empty, which was :27017
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.051-0500 I REPL [ReplCoord-1] Entering primary catch-up mode.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.051-0500 I REPL [ReplCoord-1] Exited primary catch-up mode.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:52.051-0500 I REPL [ReplCoord-1] Stopping replication producer
[ShardedClusterFixture:job0:shard1] Waiting for primary on port 20004 to be elected.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.050-0500 I REPL [ReplBatcher] Oplog buffer has been drained in term 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.050-0500 I REPL [RstlKillOpThread] Starting to kill user operations
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.051-0500 I REPL [RstlKillOpThread] Stopped killing user operations
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.051-0500 I REPL [RstlKillOpThread] State transition ops metrics: { lastStateTransition: "stepUp", userOpsKilled: 0, userOpsRunning: 0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.051-0500 I SHARDING [OplogApplier-0] Marking collection config.transactions as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.051-0500 I STORAGE [OplogApplier-0] createCollection: config.transactions with generated UUID: ec61ac84-71d3-4912-9466-2724ab31be3d and options: {}
[ShardedClusterFixture:job0:shard1] Waiting for primary on port 20004 to be elected.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.063-0500 I INDEX [OplogApplier-0] index build: done building index _id_ on ns config.transactions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.063-0500 I STORAGE [OplogApplier-0] IndexBuildsCoordinator::onStepUp - this node is stepping up to primary
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.063-0500 I REPL [OplogApplier-0] transition to primary complete; database writes are now permitted
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.110-0500 I STORAGE [WTJournalFlusher] Triggering the first stable checkpoint. Initial Data: Timestamp(1574796652, 1) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1574796653, 2)
[ShardedClusterFixture:job0:shard1] Waiting for primary on port 20004 to be elected.
[ShardedClusterFixture:job0:shard1] Primary on port 20004 successfully elected.
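The repeated "Waiting for primary" lines above come from a poll loop that re-checks the node until it reports itself primary. A minimal sketch, assuming a `check` callable that stands in for an isMaster/hello round-trip (the real loop lives in resmoke's fixture code and is not reproduced here):

```python
import time

def wait_for_primary(check, timeout_secs=30, interval_secs=0.1):
    """Poll check() until it reports a primary or the timeout elapses.

    check is a stand-in for asking the node whether it is primary
    (ismaster: true); each False answer corresponds to one
    'Waiting for primary ...' log line.
    """
    deadline = time.monotonic() + timeout_secs
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_secs)
    return False

# Simulate a node that becomes primary on the third poll.
answers = iter([False, False, True])
elected = wait_for_primary(lambda: next(answers), timeout_secs=5, interval_secs=0)
```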
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.161-0500 I NETWORK [conn7] end connection 127.0.0.1:45630 (3 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.161-0500 I NETWORK [conn6] end connection 127.0.0.1:45628 (2 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.162-0500 I NETWORK [listener] connection accepted from 127.0.0.1:50830 #1 (1 connection now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.163-0500 I NETWORK [conn1] received client metadata from 127.0.0.1:50830 conn1: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.164-0500 I NETWORK [listener] connection accepted from 127.0.0.1:50832 #2 (2 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.164-0500 I NETWORK [conn2] received client metadata from 127.0.0.1:50832 conn2: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary0] Successfully contacted the mongod on port 20005.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.165-0500 I NETWORK [conn2] end connection 127.0.0.1:50832 (1 connection now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.165-0500 I NETWORK [conn1] end connection 127.0.0.1:50830 (0 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.166-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34194 #1 (1 connection now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.166-0500 I NETWORK [conn1] received client metadata from 127.0.0.1:34194 conn1: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.167-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34196 #2 (2 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.167-0500 I NETWORK [conn2] received client metadata from 127.0.0.1:34196 conn2: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary1] Successfully contacted the mongod on port 20006.
[ShardedClusterFixture:job0:shard1] Issuing replSetReconfig command: {'_id': 'shard-rs1', 'protocolVersion': 1, 'settings': {'electionTimeoutMillis': 86400000}, 'members': [{'_id': 0, 'host': 'localhost:20004'}, {'_id': 1, 'host': 'localhost:20005', 'priority': 0}, {'_id': 2, 'host': 'localhost:20006', 'priority': 0}], 'version': 2}
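The reconfig document above adds the two secondaries with `priority: 0`, so the node already elected on port 20004 cannot be displaced, and bumps the config `version` to 2. A hypothetical helper (the name and signature are illustrative; only the document shape comes from the log):

```python
# Hypothetical helper mirroring the replSetReconfig document in the log.
# The first port keeps default priority; every later member gets
# priority 0 so it can never win an election over the existing primary.

def make_reconfig_config(set_name, ports, version, election_timeout_ms=86400000):
    """Build a reconfig document growing the set to len(ports) members."""
    members = [{'_id': 0, 'host': 'localhost:%d' % ports[0]}]
    for i, port in enumerate(ports[1:], start=1):
        members.append({'_id': i, 'host': 'localhost:%d' % port, 'priority': 0})
    return {
        '_id': set_name,
        'protocolVersion': 1,
        'settings': {'electionTimeoutMillis': election_timeout_ms},
        'members': members,
        'version': version,  # must exceed the currently installed version
    }

reconfig = make_reconfig_config('shard-rs1', [20004, 20005, 20006], version=2)
```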
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.168-0500 I NETWORK [conn2] end connection 127.0.0.1:34196 (1 connection now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.168-0500 I NETWORK [conn1] end connection 127.0.0.1:34194 (0 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.169-0500 I REPL [conn5] replSetReconfig admin command received from client; new config: { _id: "shard-rs1", protocolVersion: 1, settings: { electionTimeoutMillis: 86400000 }, members: [ { _id: 0, host: "localhost:20004" }, { _id: 1, host: "localhost:20005", priority: 0 }, { _id: 2, host: "localhost:20006", priority: 0 } ], version: 2 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.169-0500 I NETWORK [listener] connection accepted from 127.0.0.1:50838 #3 (1 connection now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.169-0500 I NETWORK [conn3] end connection 127.0.0.1:50838 (0 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.170-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34200 #3 (1 connection now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.170-0500 I NETWORK [conn3] end connection 127.0.0.1:34200 (0 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.170-0500 I REPL [conn5] replSetReconfig config object with 3 members parses ok
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.170-0500 I REPL [conn5] Scheduling remote command request for reconfig quorum check: RemoteCommand 1 -- target:localhost:20005 db:admin cmd:{ replSetHeartbeat: "shard-rs1", configVersion: 2, hbv: 1, from: "localhost:20004", fromId: 0, term: 1 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.170-0500 I REPL [conn5] Scheduling remote command request for reconfig quorum check: RemoteCommand 2 -- target:localhost:20006 db:admin cmd:{ replSetHeartbeat: "shard-rs1", configVersion: 2, hbv: 1, from: "localhost:20004", fromId: 0, term: 1 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.170-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20005
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.170-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.170-0500 I NETWORK [listener] connection accepted from 127.0.0.1:50842 #4 (1 connection now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.170-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34204 #4 (1 connection now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.170-0500 I NETWORK [conn4] received client metadata from 127.0.0.1:50842 conn4: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.171-0500 I NETWORK [conn4] received client metadata from 127.0.0.1:34204 conn4: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.171-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.171-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.171-0500 I REPL [conn5] New replica set config in use: { _id: "shard-rs1", version: 2, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "localhost:20004", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:20005", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:20006", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 86400000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5ddd7d6bcf8184c2e1492eba') } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.171-0500 I REPL [conn5] This node is localhost:20004 in the config
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.172-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34210 #6 (2 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.172-0500 I NETWORK [conn6] received client metadata from 127.0.0.1:34210 conn6: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.171-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45648 #12 (3 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.171-0500 I REPL [ReplCoord-1] Member localhost:20005 is now in state STARTUP
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.172-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45650 #14 (4 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.173-0500 I NETWORK [listener] connection accepted from 127.0.0.1:50856 #7 (2 connections now open)
[ShardedClusterFixture:job0:shard1] Waiting for secondary on port 20005 to become available.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.172-0500 I NETWORK [conn12] received client metadata from 127.0.0.1:45648 conn12: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.173-0500 I NETWORK [listener] connection accepted from 127.0.0.1:50858 #8 (3 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.172-0500 I NETWORK [conn14] received client metadata from 127.0.0.1:45650 conn14: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.172-0500 I REPL [ReplCoord-0] Member localhost:20006 is now in state STARTUP
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.173-0500 I NETWORK [conn7] received client metadata from 127.0.0.1:50856 conn7: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.172-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45654 #15 (5 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.174-0500 I STORAGE [ReplCoord-0] createCollection: local.system.replset with generated UUID: c43cc3e4-845d-4144-8406-83bf4df96d39 and options: {}
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.173-0500 I NETWORK [conn8] end connection 127.0.0.1:50858 (2 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.173-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45656 #16 (6 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.174-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34220 #9 (3 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.174-0500 I NETWORK [listener] connection accepted from 127.0.0.1:50862 #10 (3 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.173-0500 I NETWORK [conn15] end connection 127.0.0.1:45654 (5 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.174-0500 I NETWORK [conn9] end connection 127.0.0.1:34220 (2 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.174-0500 I STORAGE [ReplCoord-0] createCollection: local.system.replset with generated UUID: 2b695a66-e9c6-4bba-a36e-eb0a5cf356ba and options: {}
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:53.173-0500 I NETWORK [conn16] end connection 127.0.0.1:45656 (4 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.174-0500 I NETWORK [conn10] received client metadata from 127.0.0.1:50862 conn10: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.188-0500 I INDEX [ReplCoord-0] index build: done building index _id_ on ns local.system.replset
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.188-0500 I REPL [ReplCoord-0] New replica set config in use: { _id: "shard-rs1", version: 2, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "localhost:20004", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:20005", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:20006", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 86400000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5ddd7d6bcf8184c2e1492eba') } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.188-0500 I REPL [ReplCoord-0] This node is localhost:20006 in the config
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.188-0500 I REPL [ReplCoord-0] transition to STARTUP2 from STARTUP
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.189-0500 I REPL [ReplCoord-0] Starting replication storage threads
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.189-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20005
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.189-0500 I INDEX [ReplCoord-0] index build: done building index _id_ on ns local.system.replset
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.189-0500 I REPL [ReplCoord-2] Member localhost:20004 is now in state PRIMARY
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.189-0500 I REPL [ReplCoord-0] New replica set config in use: { _id: "shard-rs1", version: 2, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "localhost:20004", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:20005", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:20006", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 86400000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5ddd7d6bcf8184c2e1492eba') } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.189-0500 I REPL [ReplCoord-0] This node is localhost:20005 in the config
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.189-0500 I REPL [ReplCoord-0] transition to STARTUP2 from STARTUP
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.190-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.190-0500 I REPL [ReplCoord-0] Starting replication storage threads
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.190-0500 I REPL [ReplCoord-1] Member localhost:20004 is now in state PRIMARY
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.200-0500 I STORAGE [ReplCoord-0] createCollection: local.temp_oplog_buffer with generated UUID: af1b6f2c-2880-4efd-a7a7-ea61c620f97f and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.201-0500 I STORAGE [ReplCoord-0] createCollection: local.temp_oplog_buffer with generated UUID: b58f1846-b60e-45ba-aeac-ca03471b08bd and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.226-0500 I NETWORK [listener] connection accepted from 127.0.0.1:50864 #11 (4 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.226-0500 I NETWORK [conn11] received client metadata from 127.0.0.1:50864 conn11: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.227-0500 I REPL [ReplCoord-1] Member localhost:20005 is now in state STARTUP2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.230-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34226 #11 (3 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.230-0500 I NETWORK [conn11] received client metadata from 127.0.0.1:34226 conn11: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.231-0500 I REPL [ReplCoord-1] Member localhost:20006 is now in state STARTUP2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.238-0500 I INDEX [ReplCoord-0] index build: done building index _id_ on ns local.temp_oplog_buffer
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.239-0500 I INITSYNC [ReplCoordExtern-0] Starting initial sync (attempt 1 of 10)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.239-0500 I STORAGE [ReplCoordExtern-0] Finishing collection drop for local.temp_oplog_buffer (b58f1846-b60e-45ba-aeac-ca03471b08bd).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.241-0500 I STORAGE [ReplCoordExtern-0] createCollection: local.temp_oplog_buffer with generated UUID: 12f90796-5f71-4799-a05f-43236c2a9ad8 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.243-0500 I INDEX [ReplCoord-0] index build: done building index _id_ on ns local.temp_oplog_buffer
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.243-0500 I INITSYNC [ReplCoordExtern-0] Starting initial sync (attempt 1 of 10)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.243-0500 I STORAGE [ReplCoordExtern-0] Finishing collection drop for local.temp_oplog_buffer (af1b6f2c-2880-4efd-a7a7-ea61c620f97f).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.246-0500 I STORAGE [ReplCoordExtern-0] createCollection: local.temp_oplog_buffer with generated UUID: 26cc4ffd-5f3e-4a3a-9024-1265b75e4dbe and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.254-0500 I INDEX [ReplCoordExtern-0] index build: done building index _id_ on ns local.temp_oplog_buffer
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:53.254-0500 I REPL [ReplCoordExtern-0] waiting for 1 pings from other members before syncing
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.259-0500 I INDEX [ReplCoordExtern-0] index build: done building index _id_ on ns local.temp_oplog_buffer
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:53.259-0500 I REPL [ReplCoordExtern-0] waiting for 1 pings from other members before syncing
[ShardedClusterFixture:job0:shard1] Waiting for secondary on port 20005 to become available.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.254-0500 I REPL [ReplCoordExtern-0] sync source candidate: localhost:20004
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.254-0500 I INITSYNC [ReplCoordExtern-0] Initial syncer oplog truncation finished in: 0ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.254-0500 I REPL [ReplCoordExtern-0] ******
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.254-0500 I REPL [ReplCoordExtern-0] creating replication oplog of size: 1024MB...
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.254-0500 I STORAGE [ReplCoordExtern-0] createCollection: local.oplog.rs with generated UUID: 6c707c3f-4064-4e35-98fb-b2fff8245539 and options: { capped: true, size: 1073741824.0, autoIndexId: false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.259-0500 I REPL [ReplCoordExtern-1] sync source candidate: localhost:20004
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.259-0500 I INITSYNC [ReplCoordExtern-1] Initial syncer oplog truncation finished in: 0ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.259-0500 I REPL [ReplCoordExtern-1] ******
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.259-0500 I REPL [ReplCoordExtern-1] creating replication oplog of size: 1024MB...
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.259-0500 I STORAGE [ReplCoordExtern-1] createCollection: local.oplog.rs with generated UUID: 307925b3-4143-4c06-a46a-f04119b3afb4 and options: { capped: true, size: 1073741824.0, autoIndexId: false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.259-0500 I STORAGE [ReplCoordExtern-0] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.259-0500 I STORAGE [ReplCoordExtern-0] WiredTiger record store oplog processing took 0ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.264-0500 I STORAGE [ReplCoordExtern-1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.264-0500 I STORAGE [ReplCoordExtern-1] WiredTiger record store oplog processing took 0ms
[ShardedClusterFixture:job0:shard1] Waiting for secondary on port 20005 to become available.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.292-0500 I REPL [ReplCoordExtern-0] ******
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.292-0500 I REPL [ReplCoordExtern-0] dropReplicatedDatabases - dropping 1 databases
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.292-0500 I REPL [ReplCoordExtern-0] dropReplicatedDatabases - dropped 1 databases
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.292-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.293-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45670 #17 (5 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.293-0500 I NETWORK [conn17] received client metadata from 127.0.0.1:45670 conn17: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.295-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45672 #18 (6 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.295-0500 I NETWORK [conn18] received client metadata from 127.0.0.1:45672 conn18: { driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.295-0500 I SHARDING [ReplCoordExtern-2] Marking collection local.temp_oplog_buffer as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.296-0500 I REPL [ReplCoordExtern-1] ******
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.296-0500 I STORAGE [ReplCoordExtern-1] createCollection: admin.system.version with provided UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5 and options: { uuid: UUID("19b398bd-025a-4aca-9299-76bf6d82acc5") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.296-0500 I REPL [ReplCoordExtern-1] dropReplicatedDatabases - dropping 1 databases
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.296-0500 I REPL [ReplCoordExtern-1] dropReplicatedDatabases - dropped 1 databases
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.296-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.296-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45674 #19 (7 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.297-0500 I NETWORK [conn19] received client metadata from 127.0.0.1:45674 conn19: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.298-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45676 #20 (8 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.298-0500 I SHARDING [ReplCoordExtern-2] Marking collection local.temp_oplog_buffer as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.299-0500 I NETWORK [conn20] received client metadata from 127.0.0.1:45676 conn20: { driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.300-0500 I STORAGE [ReplCoordExtern-1] createCollection: admin.system.version with provided UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5 and options: { uuid: UUID("19b398bd-025a-4aca-9299-76bf6d82acc5") }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.323-0500 I INDEX [ReplCoordExtern-1] index build: starting on admin.system.version properties: { v: 2, key: { _id: 1 }, name: "_id_" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.323-0500 I INDEX [ReplCoordExtern-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.323-0500 I COMMAND [ReplWriterWorker-15] setting featureCompatibilityVersion to 4.2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.324-0500 I INDEX [ReplCoordExtern-1] index build: inserted 1 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.327-0500 I INDEX [ReplCoordExtern-1] index build: done building index _id_ on ns admin.system.version
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.328-0500 I INDEX [ReplCoordExtern-1] index build: starting on admin.system.version properties: { v: 2, key: { _id: 1 }, name: "_id_" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.328-0500 I INDEX [ReplCoordExtern-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.328-0500 I COMMAND [ReplWriterWorker-15] setting featureCompatibilityVersion to 4.2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.329-0500 I INDEX [ReplCoordExtern-1] index build: inserted 1 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.331-0500 I STORAGE [ReplCoordExtern-1] createCollection: config.transactions with provided UUID: ec61ac84-71d3-4912-9466-2724ab31be3d and options: { uuid: UUID("ec61ac84-71d3-4912-9466-2724ab31be3d") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.331-0500 I INDEX [ReplCoordExtern-1] index build: done building index _id_ on ns admin.system.version
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.336-0500 I STORAGE [ReplCoordExtern-1] createCollection: config.transactions with provided UUID: ec61ac84-71d3-4912-9466-2724ab31be3d and options: { uuid: UUID("ec61ac84-71d3-4912-9466-2724ab31be3d") }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.358-0500 I INDEX [ReplCoordExtern-1] index build: starting on config.transactions properties: { v: 2, key: { _id: 1 }, name: "_id_" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.358-0500 I INDEX [ReplCoordExtern-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.359-0500 I INDEX [ReplCoordExtern-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.362-0500 I INDEX [ReplCoordExtern-1] index build: done building index _id_ on ns config.transactions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.363-0500 I INDEX [ReplCoordExtern-1] index build: starting on config.transactions properties: { v: 2, key: { _id: 1 }, name: "_id_" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.363-0500 I INDEX [ReplCoordExtern-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.364-0500 I INDEX [ReplCoordExtern-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.365-0500 I INITSYNC [ReplCoordExtern-1] Finished cloning data: OK. Beginning oplog replay.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.365-0500 I NETWORK [conn18] end connection 127.0.0.1:45672 (7 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.365-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45678 #21 (8 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.365-0500 I NETWORK [conn21] received client metadata from 127.0.0.1:45678 conn21: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.366-0500 I INITSYNC [ReplCoordExtern-2] No need to apply operations. (currently at { : Timestamp(1574796653, 3) })
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.366-0500 I INDEX [ReplCoordExtern-1] index build: done building index _id_ on ns config.transactions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.366-0500 I INITSYNC [ReplCoordExtern-0] Finished fetching oplog during initial sync: CallbackCanceled: error in fetcher batch callback: oplog fetcher is shutting down. Last fetched optime: { ts: Timestamp(0, 0), t: -1 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.366-0500 I INITSYNC [ReplCoordExtern-0] Initial sync attempt finishing up.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.366-0500 I INITSYNC [ReplCoordExtern-0] Initial Sync Attempt Statistics: { failedInitialSyncAttempts: 0, maxFailedInitialSyncAttempts: 10, initialSyncStart: new Date(1574796653238), initialSyncAttempts: [], appliedOps: 0, initialSyncOplogStart: Timestamp(1574796653, 3), initialSyncOplogEnd: Timestamp(1574796653, 3), databases: { databasesCloned: 2, databaseCount: 2, admin: { collections: 1, clonedCollections: 1, start: new Date(1574796654295), admin.system.version: { documentsToCopy: 1, documentsCopied: 1, indexes: 1, fetchedBatches: 1, receivedBatches: 1 } }, config: { collections: 1, clonedCollections: 1, start: new Date(1574796654331), config.transactions: { documentsToCopy: 0, documentsCopied: 0, indexes: 1, fetchedBatches: 0, receivedBatches: 0 } } } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.366-0500 I STORAGE [ReplCoordExtern-0] Finishing collection drop for local.temp_oplog_buffer (12f90796-5f71-4799-a05f-43236c2a9ad8).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.369-0500 I INITSYNC [ReplCoordExtern-1] Finished cloning data: OK. Beginning oplog replay.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.369-0500 I NETWORK [conn20] end connection 127.0.0.1:45676 (7 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.369-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45680 #22 (8 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.369-0500 I SHARDING [ReplCoordExtern-0] Marking collection config.transactions as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.369-0500 I NETWORK [conn22] received client metadata from 127.0.0.1:45680 conn22: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.369-0500 I SHARDING [ReplCoordExtern-0] Marking collection local.replset.oplogTruncateAfterPoint as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.370-0500 I INITSYNC [ReplCoordExtern-2] No need to apply operations. (currently at { : Timestamp(1574796653, 3) })
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.370-0500 I INITSYNC [ReplCoordExtern-0] Finished fetching oplog during initial sync: CallbackCanceled: error in fetcher batch callback: oplog fetcher is shutting down. Last fetched optime: { ts: Timestamp(0, 0), t: -1 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.370-0500 I INITSYNC [ReplCoordExtern-0] Initial sync attempt finishing up.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.370-0500 I INITSYNC [ReplCoordExtern-0] Initial Sync Attempt Statistics: { failedInitialSyncAttempts: 0, maxFailedInitialSyncAttempts: 10, initialSyncStart: new Date(1574796653243), initialSyncAttempts: [], appliedOps: 0, initialSyncOplogStart: Timestamp(1574796653, 3), initialSyncOplogEnd: Timestamp(1574796653, 3), databases: { databasesCloned: 2, databaseCount: 2, admin: { collections: 1, clonedCollections: 1, start: new Date(1574796654299), admin.system.version: { documentsToCopy: 1, documentsCopied: 1, indexes: 1, fetchedBatches: 1, receivedBatches: 1 } }, config: { collections: 1, clonedCollections: 1, start: new Date(1574796654336), config.transactions: { documentsToCopy: 0, documentsCopied: 0, indexes: 1, fetchedBatches: 0, receivedBatches: 0 } } } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.370-0500 I STORAGE [ReplCoordExtern-0] Finishing collection drop for local.temp_oplog_buffer (26cc4ffd-5f3e-4a3a-9024-1265b75e4dbe).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.370-0500 I INITSYNC [ReplCoordExtern-0] initial sync done; took 1s.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.370-0500 I REPL [ReplCoordExtern-0] transition to RECOVERING from STARTUP2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.370-0500 I REPL [ReplCoordExtern-0] Starting replication fetcher thread
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.370-0500 I REPL [ReplCoordExtern-0] Starting replication applier thread
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.370-0500 I REPL [ReplCoordExtern-0] Starting replication reporter thread
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.370-0500 I REPL [OplogApplier-0] Starting oplog application
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.371-0500 I REPL [BackgroundSync] could not find member to sync from
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.372-0500 I SHARDING [ReplCoordExtern-0] Marking collection config.transactions as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.372-0500 I SHARDING [ReplCoordExtern-0] Marking collection local.replset.oplogTruncateAfterPoint as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.373-0500 I INITSYNC [ReplCoordExtern-0] initial sync done; took 1s.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.373-0500 I REPL [ReplCoordExtern-0] transition to RECOVERING from STARTUP2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.373-0500 I REPL [ReplCoordExtern-0] Starting replication fetcher thread
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.373-0500 I REPL [ReplCoordExtern-0] Starting replication applier thread
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.373-0500 I REPL [ReplCoordExtern-0] Starting replication reporter thread
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.373-0500 I REPL [OplogApplier-0] Starting oplog application
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.374-0500 I REPL [BackgroundSync] could not find member to sync from
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.375-0500 I REPL [OplogApplier-0] transition to SECONDARY from RECOVERING
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.375-0500 I REPL [OplogApplier-0] Resetting sync source to empty, which was :27017
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.377-0500 I REPL [ReplCoord-4] Member localhost:20006 is now in state RECOVERING
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.377-0500 I REPL [ReplCoord-3] Member localhost:20005 is now in state SECONDARY
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.377-0500 I REPL [OplogApplier-0] transition to SECONDARY from RECOVERING
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.377-0500 I REPL [OplogApplier-0] Resetting sync source to empty, which was :27017
[ShardedClusterFixture:job0:shard1] Waiting for secondary on port 20005 to become available.
[ShardedClusterFixture:job0:shard1] Secondary on port 20005 is now available.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.384-0500 I NETWORK [conn10] end connection 127.0.0.1:50862 (3 connections now open)
[ShardedClusterFixture:job0:shard1] Waiting for secondary on port 20006 to become available.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.384-0500 I NETWORK [conn7] end connection 127.0.0.1:50856 (2 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.384-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34240 #15 (4 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.385-0500 I NETWORK [conn15] received client metadata from 127.0.0.1:34240 conn15: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.385-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34242 #16 (5 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.386-0500 I NETWORK [conn16] received client metadata from 127.0.0.1:34242 conn16: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1] Secondary on port 20006 is now available.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.386-0500 I NETWORK [conn16] end connection 127.0.0.1:34242 (4 connections now open)
[fsm_workload_test:job0_fixture_setup] 2019-11-26T14:30:54.387-0500 Waiting for ShardedClusterFixture (Job #0) to be ready.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.386-0500 I NETWORK [conn5] end connection 127.0.0.1:45626 (7 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.386-0500 I NETWORK [conn15] end connection 127.0.0.1:34240 (3 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.386-0500 I NETWORK [conn4] end connection 127.0.0.1:45624 (6 connections now open)
[ShardedClusterFixture:job0:configsvr] Waiting for primary on port 20000 to be elected.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.388-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55480 #8 (1 connection now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.388-0500 I NETWORK [conn8] received client metadata from 127.0.0.1:55480 conn8: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.388-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55482 #9 (2 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.389-0500 I NETWORK [conn9] received client metadata from 127.0.0.1:55482 conn9: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr] Primary on port 20000 successfully elected.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.389-0500 I NETWORK [conn9] end connection 127.0.0.1:55482 (1 connection now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.389-0500 I NETWORK [conn8] end connection 127.0.0.1:55480 (0 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.390-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55484 #10 (1 connection now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.390-0500 I NETWORK [conn10] received client metadata from 127.0.0.1:55484 conn10: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.391-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55486 #11 (2 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.391-0500 I NETWORK [conn11] received client metadata from 127.0.0.1:55486 conn11: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr] Waiting for node on port 20000 to have a stable recovery timestamp.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.396-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55488 #12 (3 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.396-0500 I NETWORK [conn12] received client metadata from 127.0.0.1:55488 conn12: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.397-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55490 #13 (4 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.397-0500 I NETWORK [conn13] received client metadata from 127.0.0.1:55490 conn13: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr] Node on port 20000 now has a stable timestamp for recovery. Time: Timestamp(1574796646, 40)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.397-0500 I NETWORK [conn11] end connection 127.0.0.1:55486 (3 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.397-0500 I NETWORK [conn10] end connection 127.0.0.1:55484 (2 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.397-0500 I NETWORK [conn13] end connection 127.0.0.1:55490 (1 connection now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.397-0500 I NETWORK [conn12] end connection 127.0.0.1:55488 (0 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.398-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55492 #14 (1 connection now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.399-0500 I NETWORK [conn14] received client metadata from 127.0.0.1:55492 conn14: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.399-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55494 #15 (2 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.399-0500 I NETWORK [conn15] received client metadata from 127.0.0.1:55494 conn15: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.400-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database config from version {} to version { uuid: UUID("6d730968-7438-4537-a240-5127b08df159"), lastMod: 0 } took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.400-0500 I CONTROL [conn15] Failed to refresh session cache, will try again at the next refresh interval :: caused by :: ShardNotFound: Failed to create config.system.sessions: cannot create the collection until there are shards
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.400-0500 I NETWORK [conn15] end connection 127.0.0.1:55494 (1 connection now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:54.400-0500 I NETWORK [conn14] end connection 127.0.0.1:55492 (0 connections now open)
[ShardedClusterFixture:job0:shard0] Waiting for primary on port 20001 to be elected.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.401-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38236 #20 (5 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.402-0500 I NETWORK [conn20] received client metadata from 127.0.0.1:38236 conn20: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.402-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38238 #21 (6 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.403-0500 I NETWORK [conn21] received client metadata from 127.0.0.1:38238 conn21: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0] Primary on port 20001 successfully elected.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.403-0500 I NETWORK [conn21] end connection 127.0.0.1:38238 (5 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.403-0500 I NETWORK [conn20] end connection 127.0.0.1:38236 (4 connections now open)
[ShardedClusterFixture:job0:shard0] Waiting for secondary on port 20002 to become available.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:54.405-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51274 #19 (5 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:54.405-0500 I NETWORK [conn19] received client metadata from 127.0.0.1:51274 conn19: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:54.406-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51276 #20 (6 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:54.406-0500 I NETWORK [conn20] received client metadata from 127.0.0.1:51276 conn20: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0] Secondary on port 20002 is now available.
[ShardedClusterFixture:job0:shard0] Waiting for secondary on port 20003 to become available.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.408-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52166 #17 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:54.407-0500 I NETWORK [conn20] end connection 127.0.0.1:51276 (5 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:54.407-0500 I NETWORK [conn19] end connection 127.0.0.1:51274 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.408-0500 I NETWORK [conn17] received client metadata from 127.0.0.1:52166 conn17: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.409-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52168 #18 (5 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.409-0500 I NETWORK [conn18] received client metadata from 127.0.0.1:52168 conn18: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0] Secondary on port 20003 is now available.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.409-0500 I NETWORK [conn18] end connection 127.0.0.1:52168 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.409-0500 I NETWORK [conn17] end connection 127.0.0.1:52166 (3 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.410-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38248 #22 (5 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.411-0500 I NETWORK [conn22] received client metadata from 127.0.0.1:38248 conn22: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.411-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38250 #23 (6 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.411-0500 I NETWORK [conn23] received client metadata from 127.0.0.1:38250 conn23: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0] Waiting for node on port 20001 to have a stable recovery timestamp.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.413-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38252 #24 (7 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.413-0500 I NETWORK [conn24] received client metadata from 127.0.0.1:38252 conn24: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.413-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38254 #25 (8 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.414-0500 I NETWORK [conn25] received client metadata from 127.0.0.1:38254 conn25: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0] Node on port 20001 now has a stable timestamp for recovery. Time: Timestamp(1574796649, 2)
[ShardedClusterFixture:job0:shard0] Waiting for node on port 20002 to have a stable recovery timestamp.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:54.416-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51290 #21 (5 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.416-0500 I NETWORK [conn25] end connection 127.0.0.1:38254 (7 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:54.416-0500 I NETWORK [conn21] received client metadata from 127.0.0.1:51290 conn21: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.416-0500 I NETWORK [conn24] end connection 127.0.0.1:38252 (6 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:54.417-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51292 #22 (6 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:54.417-0500 I NETWORK [conn22] received client metadata from 127.0.0.1:51292 conn22: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0] Node on port 20002 now has a stable timestamp for recovery. Time: Timestamp(1574796649, 3)
[ShardedClusterFixture:job0:shard0] Waiting for node on port 20003 to have a stable recovery timestamp.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:54.418-0500 I NETWORK [conn22] end connection 127.0.0.1:51292 (5 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.418-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52182 #19 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:54.419-0500 I NETWORK [conn21] end connection 127.0.0.1:51290 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.419-0500 I NETWORK [conn19] received client metadata from 127.0.0.1:52182 conn19: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.419-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52184 #20 (5 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.419-0500 I NETWORK [conn20] received client metadata from 127.0.0.1:52184 conn20: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0] Node on port 20003 now has a stable timestamp for recovery. Time: Timestamp(1574796649, 3)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.420-0500 I NETWORK [conn23] end connection 127.0.0.1:38250 (5 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.420-0500 I NETWORK [conn22] end connection 127.0.0.1:38248 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.420-0500 I NETWORK [conn20] end connection 127.0.0.1:52184 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.420-0500 I NETWORK [conn19] end connection 127.0.0.1:52182 (3 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.421-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38264 #26 (5 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.421-0500 I NETWORK [conn26] received client metadata from 127.0.0.1:38264 conn26: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.422-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38266 #27 (6 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.422-0500 I NETWORK [conn27] received client metadata from 127.0.0.1:38266 conn27: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.422-0500 I CONTROL [conn27] Failed to refresh session cache, will try again at the next refresh interval :: caused by :: ShardingStateNotInitialized: sharding state is not yet initialized
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.423-0500 I NETWORK [conn27] end connection 127.0.0.1:38266 (5 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.423-0500 I NETWORK [conn26] end connection 127.0.0.1:38264 (4 connections now open)
[ShardedClusterFixture:job0:shard1] Waiting for primary on port 20004 to be elected.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.424-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45734 #23 (7 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.424-0500 I NETWORK [conn23] received client metadata from 127.0.0.1:45734 conn23: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.425-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45736 #24 (8 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.425-0500 I NETWORK [conn24] received client metadata from 127.0.0.1:45736 conn24: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1] Primary on port 20004 successfully elected.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.426-0500 I NETWORK [conn24] end connection 127.0.0.1:45736 (7 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.426-0500 I NETWORK [conn23] end connection 127.0.0.1:45734 (6 connections now open)
[ShardedClusterFixture:job0:shard1] Waiting for secondary on port 20005 to become available.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.427-0500 I NETWORK [listener] connection accepted from 127.0.0.1:50936 #16 (3 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.427-0500 I NETWORK [conn16] received client metadata from 127.0.0.1:50936 conn16: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.428-0500 I NETWORK [listener] connection accepted from 127.0.0.1:50938 #17 (4 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.428-0500 I NETWORK [conn17] received client metadata from 127.0.0.1:50938 conn17: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1] Secondary on port 20005 is now available.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.429-0500 I NETWORK [conn17] end connection 127.0.0.1:50938 (3 connections now open)
[ShardedClusterFixture:job0:shard1] Waiting for secondary on port 20006 to become available.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.429-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34300 #17 (4 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.429-0500 I NETWORK [conn16] end connection 127.0.0.1:50936 (2 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.430-0500 I NETWORK [conn17] received client metadata from 127.0.0.1:34300 conn17: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.430-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34302 #18 (5 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.430-0500 I NETWORK [conn18] received client metadata from 127.0.0.1:34302 conn18: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1] Secondary on port 20006 is now available.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.431-0500 I NETWORK [conn18] end connection 127.0.0.1:34302 (4 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:54.431-0500 I NETWORK [conn17] end connection 127.0.0.1:34300 (3 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.432-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45746 #25 (7 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.432-0500 I NETWORK [conn25] received client metadata from 127.0.0.1:45746 conn25: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.433-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45748 #26 (8 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.433-0500 I NETWORK [conn26] received client metadata from 127.0.0.1:45748 conn26: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1] Waiting for node on port 20004 to have a stable recovery timestamp.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.435-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45750 #27 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.435-0500 I NETWORK [conn27] received client metadata from 127.0.0.1:45750 conn27: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.436-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45752 #28 (10 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.436-0500 I NETWORK [conn28] received client metadata from 127.0.0.1:45752 conn28: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1] Node on port 20004 now has a stable timestamp for recovery. Time: Timestamp(1574796653, 2)
[ShardedClusterFixture:job0:shard1] Waiting for node on port 20005 to have a stable recovery timestamp.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.437-0500 I NETWORK [conn28] end connection 127.0.0.1:45752 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:54.437-0500 I NETWORK [conn27] end connection 127.0.0.1:45750 (8 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.437-0500 I NETWORK [listener] connection accepted from 127.0.0.1:50952 #18 (3 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.438-0500 I NETWORK [conn18] received client metadata from 127.0.0.1:50952 conn18: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.438-0500 I NETWORK [listener] connection accepted from 127.0.0.1:50954 #19 (4 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.438-0500 I NETWORK [conn19] received client metadata from 127.0.0.1:50954 conn19: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.647-0500 I REPL [BackgroundSync] sync source candidate: localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.648-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.648-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38292 #28 (5 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.648-0500 I NETWORK [conn28] received client metadata from 127.0.0.1:38292 conn28: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.649-0500 I REPL [BackgroundSync] Changed sync source from empty to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.649-0500 I REPL [BackgroundSync] scheduling fetcher to read remote oplog on localhost:20001 starting at filter: { ts: { $gte: Timestamp(1574796649, 3) } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.649-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38294 #29 (6 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:54.650-0500 I NETWORK [conn29] received client metadata from 127.0.0.1:38294 conn29: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:54.668-0500 I REPL [BackgroundSync] sync source candidate: localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:54.668-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.668-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52218 #23 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.669-0500 I NETWORK [conn23] received client metadata from 127.0.0.1:52218 conn23: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:54.669-0500 I REPL [BackgroundSync] Changed sync source from empty to localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:54.670-0500 I REPL [BackgroundSync] scheduling fetcher to read remote oplog on localhost:20003 starting at filter: { ts: { $gte: Timestamp(1574796649, 3) } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.670-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52220 #24 (5 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:54.670-0500 I NETWORK [conn24] received client metadata from 127.0.0.1:52220 conn24: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:54.877-0500 I REPL [ReplCoord-2] Member localhost:20006 is now in state SECONDARY
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:55.171-0500 I REPL [ReplCoord-1] Member localhost:20005 is now in state SECONDARY
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:55.172-0500 I REPL [ReplCoord-2] Member localhost:20006 is now in state SECONDARY
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:55.374-0500 I STORAGE [ReplCoord-3] Triggering the first stable checkpoint. Initial Data: Timestamp(1574796653, 3) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1574796653, 3)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:55.375-0500 I REPL [BackgroundSync] sync source candidate: localhost:20004
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:55.375-0500 I REPL [BackgroundSync] Changed sync source from empty to localhost:20004
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:55.376-0500 I REPL [BackgroundSync] scheduling fetcher to read remote oplog on localhost:20004 starting at filter: { ts: { $gte: Timestamp(1574796653, 3) } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:55.376-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:55.376-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45766 #29 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:55.376-0500 I NETWORK [conn29] received client metadata from 127.0.0.1:45766 conn29: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:55.377-0500 I STORAGE [ReplCoord-1] Triggering the first stable checkpoint. Initial Data: Timestamp(1574796653, 3) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1574796653, 3)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:55.377-0500 I REPL [BackgroundSync] sync source candidate: localhost:20004
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:55.378-0500 I REPL [BackgroundSync] Changed sync source from empty to localhost:20004
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:55.378-0500 I REPL [BackgroundSync] scheduling fetcher to read remote oplog on localhost:20004 starting at filter: { ts: { $gte: Timestamp(1574796653, 3) } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:55.378-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:55.378-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45768 #30 (10 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:55.379-0500 I NETWORK [conn30] received client metadata from 127.0.0.1:45768 conn30: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1] Node on port 20005 now has a stable timestamp for recovery. Time: Timestamp(1574796653, 3)
[ShardedClusterFixture:job0:shard1] Waiting for node on port 20006 to have a stable recovery timestamp.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:55.449-0500 I NETWORK [conn19] end connection 127.0.0.1:50954 (3 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:55.450-0500 I NETWORK [conn18] end connection 127.0.0.1:50952 (2 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:55.449-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34328 #20 (4 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:55.450-0500 I NETWORK [conn20] received client metadata from 127.0.0.1:34328 conn20: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:55.451-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34330 #21 (5 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:55.451-0500 I NETWORK [conn21] received client metadata from 127.0.0.1:34330 conn21: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1] Node on port 20006 now has a stable timestamp for recovery. Time: Timestamp(1574796653, 3)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:55.452-0500 I NETWORK [conn21] end connection 127.0.0.1:34330 (4 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:55.452-0500 I NETWORK [conn26] end connection 127.0.0.1:45748 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:55.452-0500 I NETWORK [conn20] end connection 127.0.0.1:34328 (3 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:55.452-0500 I NETWORK [conn25] end connection 127.0.0.1:45746 (8 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:55.453-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45774 #31 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:55.453-0500 I NETWORK [conn31] received client metadata from 127.0.0.1:45774 conn31: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:55.454-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45776 #32 (10 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:55.454-0500 I NETWORK [conn32] received client metadata from 127.0.0.1:45776 conn32: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:55.454-0500 I CONTROL [conn32] Failed to refresh session cache, will try again at the next refresh interval :: caused by :: ShardingStateNotInitialized: sharding state is not yet initialized
[ShardedClusterFixture:job0:mongos0] Starting mongos on port 20007...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongos --setParameter enableTestCommands=1 --setParameter logComponentVerbosity={'transaction': 3} --configdb=config-rs/localhost:20000 --port=20007
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:55.454-0500 I NETWORK [conn32] end connection 127.0.0.1:45776 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:55.454-0500 I NETWORK [conn31] end connection 127.0.0.1:45774 (8 connections now open)
[ShardedClusterFixture:job0:mongos0] mongos started on port 20007 with pid 14692.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.484-0500 W SHARDING [main] Running a sharded cluster with fewer than 3 config servers should only be done for testing purposes and is not recommended for production.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.490-0500 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.492-0500 I CONTROL [main]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.492-0500 I CONTROL [main] ** WARNING: Access control is not enabled for the database.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.492-0500 I CONTROL [main] ** Read and write access to data and configuration is unrestricted.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.492-0500 I CONTROL [main]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.492-0500 I CONTROL [main] ** WARNING: This server is bound to localhost.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.492-0500 I CONTROL [main] ** Remote systems will be unable to connect to this server.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.492-0500 I CONTROL [main] ** Start the server with --bind_ip to specify which IP
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.492-0500 I CONTROL [main] ** addresses it should serve responses from, or with --bind_ip_all to
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.492-0500 I CONTROL [main] ** bind to all interfaces. If this behavior is desired, start the
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.492-0500 I CONTROL [main] ** server with --bind_ip 127.0.0.1 to disable this warning.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.492-0500 I CONTROL [main]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.493-0500 I SHARDING [mongosMain] mongos version v0.0.0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.493-0500 I CONTROL [mongosMain] db version v0.0.0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.493-0500 I CONTROL [mongosMain] git version: unknown
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.493-0500 I CONTROL [mongosMain] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.493-0500 I CONTROL [mongosMain] allocator: system
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.493-0500 I CONTROL [mongosMain] modules: enterprise ninja
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.493-0500 I CONTROL [mongosMain] build environment:
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.493-0500 I CONTROL [mongosMain] distarch: x86_64
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.493-0500 I CONTROL [mongosMain] target_arch: x86_64
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.493-0500 I CONTROL [mongosMain] options: { net: { port: 20007 }, setParameter: { enableTestCommands: "1", logComponentVerbosity: "{'transaction': 3}" }, sharding: { configDB: "config-rs/localhost:20000" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.494-0500 I NETWORK [mongosMain] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.494-0500 I SHARDING [thread1] creating distributed lock ping thread for process nz_desktop:20007:1574796655:8358214168427282717 (sleeping for 30000ms)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.494-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:55.498-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55574 #16 (1 connection now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:55.498-0500 I NETWORK [conn16] received client metadata from 127.0.0.1:55574 conn16: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.499-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.499-0500 I SHARDING [Sharding-Fixed-0] Updating sharding state with confirmed set config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:55.499-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55576 #17 (2 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:55.499-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55578 #18 (3 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:55.499-0500 I NETWORK [conn17] received client metadata from 127.0.0.1:55576 conn17: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:55.499-0500 I NETWORK [conn18] received client metadata from 127.0.0.1:55578 conn18: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:55.500-0500 I SHARDING [conn17] Marking collection config.lockpings as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:55.500-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55580 #19 (4 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:55.500-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55582 #20 (5 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.500-0500 I SHARDING [ShardRegistry] Received reply from config server node (unknown) indicating config server optime term has increased, previous optime { ts: Timestamp(0, 0), t: -1 }, now { ts: Timestamp(1574796654, 1), t: 1 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:55.500-0500 I NETWORK [conn19] received client metadata from 127.0.0.1:55580 conn19: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:55.500-0500 I NETWORK [conn20] received client metadata from 127.0.0.1:55582 conn20: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:55.501-0500 I SHARDING [conn19] Marking collection config.databases as collection version:
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.502-0500 W FTDC [mongosMain] FTDC is disabled because neither '--logpath' nor set parameter 'diagnosticDataCollectionDirectoryPath' are specified.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.502-0500 I FTDC [mongosMain] Initializing full-time diagnostic data capture with directory ''
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.502-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:55.502-0500 I SHARDING [conn19] Marking collection config.mongos as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:55.503-0500 I STORAGE [conn19] createCollection: config.mongos with generated UUID: 57207abe-6d8d-4102-a526-bc847dba6c09 and options: {}
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.507-0500 I NETWORK [mongosMain] Listening on /tmp/mongodb-20007.sock
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.507-0500 I NETWORK [mongosMain] Listening on 127.0.0.1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.507-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database config from version {} to version { uuid: UUID("ce64d153-16ff-4148-9795-256958368b06"), lastMod: 0 } took 0 ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.507-0500 I NETWORK [mongosMain] waiting for connections on port 20007
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.507-0500 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: Collection config.system.sessions is not sharded.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.507-0500 I CONTROL [LogicalSessionCacheRefresh] Failed to refresh session cache, will try again at the next refresh interval :: caused by :: NamespaceNotSharded: Collection config.system.sessions is not sharded.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:55.530-0500 I INDEX [conn19] index build: done building index _id_ on ns config.mongos
[ShardedClusterFixture:job0:mongos0] Waiting to connect to mongos on port 20007.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.969-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44182 #6 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:55.970-0500 I NETWORK [conn6] received client metadata from 127.0.0.1:44182 conn6: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:56.071-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44184 #7 (2 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:56.071-0500 I NETWORK [conn7] received client metadata from 127.0.0.1:44184 conn7: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:56.072-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44186 #8 (3 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:56.072-0500 I NETWORK [conn8] received client metadata from 127.0.0.1:44186 conn8: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos0] Successfully contacted the mongos on port 20007.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:56.073-0500 I NETWORK [conn8] end connection 127.0.0.1:44186 (2 connections now open)
[ShardedClusterFixture:job0:mongos1] Starting mongos on port 20008...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongos --setParameter enableTestCommands=1 --setParameter logComponentVerbosity={'transaction': 3} --configdb=config-rs/localhost:20000 --port=20008
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:56.073-0500 I NETWORK [conn7] end connection 127.0.0.1:44184 (1 connection now open)
[ShardedClusterFixture:job0:mongos1] mongos started on port 20008 with pid 14729.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.111-0500 W SHARDING [main] Running a sharded cluster with fewer than 3 config servers should only be done for testing purposes and is not recommended for production.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.119-0500 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.120-0500 I CONTROL [main]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.120-0500 I CONTROL [main] ** WARNING: Access control is not enabled for the database.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.120-0500 I CONTROL [main] ** Read and write access to data and configuration is unrestricted.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.120-0500 I CONTROL [main]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.120-0500 I CONTROL [main] ** WARNING: This server is bound to localhost.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.120-0500 I CONTROL [main] ** Remote systems will be unable to connect to this server.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.120-0500 I CONTROL [main] ** Start the server with --bind_ip to specify which IP
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.120-0500 I CONTROL [main] ** addresses it should serve responses from, or with --bind_ip_all to
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.120-0500 I CONTROL [main] ** bind to all interfaces. If this behavior is desired, start the
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.120-0500 I CONTROL [main] ** server with --bind_ip 127.0.0.1 to disable this warning.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.120-0500 I CONTROL [main]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.121-0500 I SHARDING [mongosMain] mongos version v0.0.0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.121-0500 I CONTROL [mongosMain] db version v0.0.0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.121-0500 I CONTROL [mongosMain] git version: unknown
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.121-0500 I CONTROL [mongosMain] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.121-0500 I CONTROL [mongosMain] allocator: system
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.121-0500 I CONTROL [mongosMain] modules: enterprise ninja
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.121-0500 I CONTROL [mongosMain] build environment:
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.121-0500 I CONTROL [mongosMain] distarch: x86_64
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.121-0500 I CONTROL [mongosMain] target_arch: x86_64
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.121-0500 I CONTROL [mongosMain] options: { net: { port: 20008 }, setParameter: { enableTestCommands: "1", logComponentVerbosity: "{'transaction': 3}" }, sharding: { configDB: "config-rs/localhost:20000" } }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.122-0500 I NETWORK [mongosMain] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.122-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.126-0500 I SHARDING [thread1] creating distributed lock ping thread for process nz_desktop:20008:1574796656:7765268974563860519 (sleeping for 30000ms)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:56.127-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55592 #21 (6 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:56.127-0500 I NETWORK [conn21] received client metadata from 127.0.0.1:55592 conn21: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.128-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.128-0500 I SHARDING [Sharding-Fixed-0] Updating sharding state with confirmed set config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:56.128-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55594 #22 (7 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:56.128-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55596 #23 (8 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:56.128-0500 I NETWORK [conn22] received client metadata from 127.0.0.1:55594 conn22: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:56.129-0500 I NETWORK [conn23] received client metadata from 127.0.0.1:55596 conn23: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:56.129-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55598 #24 (9 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.129-0500 I SHARDING [ShardRegistry] Received reply from config server node (unknown) indicating config server optime term has increased, previous optime { ts: Timestamp(0, 0), t: -1 }, now { ts: Timestamp(1574796655, 3), t: 1 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:56.129-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55600 #25 (10 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:56.129-0500 I NETWORK [conn24] received client metadata from 127.0.0.1:55598 conn24: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:56.129-0500 I NETWORK [conn25] received client metadata from 127.0.0.1:55600 conn25: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.129-0500 I SHARDING [mongosMain] Waiting for signing keys, sleeping for 1s and trying again.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:56.131-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
[ShardedClusterFixture:job0:mongos1] Waiting to connect to mongos on port 20008.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.131-0500 W FTDC [mongosMain] FTDC is disabled because neither '--logpath' nor set parameter 'diagnosticDataCollectionDirectoryPath' are specified.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.132-0500 I FTDC [mongosMain] Initializing full-time diagnostic data capture with directory ''
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.135-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database config from version {} to version { uuid: UUID("26377358-d041-44c0-800b-e4be90b3cb59"), lastMod: 0 } took 0 ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.135-0500 I NETWORK [mongosMain] Listening on /tmp/mongodb-20008.sock
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.135-0500 I NETWORK [mongosMain] Listening on 127.0.0.1
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.136-0500 I CONTROL [LogicalSessionCacheRefresh] Failed to refresh session cache, will try again at the next refresh interval :: caused by :: NamespaceNotSharded: Collection config.system.sessions is not sharded.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.136-0500 I NETWORK [mongosMain] waiting for connections on port 20008
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.136-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection config.system.sessions took 0 ms and found the collection is not sharded
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.136-0500 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: Collection config.system.sessions is not sharded.
[ShardedClusterFixture:job0:mongos1] Waiting to connect to mongos on port 20008.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.198-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57346 #6 (1 connection now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.198-0500 I NETWORK [conn6] received client metadata from 127.0.0.1:57346 conn6: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.299-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57348 #7 (2 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.299-0500 I NETWORK [conn7] received client metadata from 127.0.0.1:57348 conn7: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.299-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57350 #8 (3 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.300-0500 I NETWORK [conn8] received client metadata from 127.0.0.1:57350 conn8: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos1] Successfully contacted the mongos on port 20008.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.301-0500 I NETWORK [conn8] end connection 127.0.0.1:57350 (2 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.301-0500 I NETWORK [conn7] end connection 127.0.0.1:57348 (1 connection now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.302-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57352 #9 (2 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.302-0500 I NETWORK [conn9] received client metadata from 127.0.0.1:57352 conn9: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.302-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44212 #9 (2 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.303-0500 I NETWORK [conn9] received client metadata from 127.0.0.1:44212 conn9: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.303-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57356 #10 (3 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.308-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44216 #10 (3 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.308-0500 I NETWORK [conn10] received client metadata from 127.0.0.1:57356 conn10: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.308-0500 I NETWORK [conn10] received client metadata from 127.0.0.1:44216 conn10: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.309-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44218 #11 (4 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.309-0500 I NETWORK [conn11] received client metadata from 127.0.0.1:44218 conn11: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.310-0500 I NETWORK [conn6] end connection 127.0.0.1:44182 (3 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.310-0500 I NETWORK [conn6] end connection 127.0.0.1:57346 (2 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.310-0500 I STORAGE [conn19] createCollection: config.settings with generated UUID: 6d167d1d-0483-49b9-9ac8-ee5b66996698 and options: {}
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.321-0500 I INDEX [conn19] index build: done building index _id_ on ns config.settings
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.323-0500 I SHARDING [conn19] ShouldAutoSplit changing from 1 to 0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.323-0500 I STORAGE [conn19] createCollection: config.actionlog with generated UUID: ff427093-1de4-4a9f-83c9-6b01392e1aea and options: { capped: true, size: 20971520 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.334-0500 I INDEX [conn19] index build: done building index _id_ on ns config.actionlog
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.334-0500 I SHARDING [conn19] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:30:57.334-0500-5ddd7d715cde74b6784bb1c0", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55580", time: new Date(1574796657334), what: "balancer.stop", ns: "", details: {} }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.335-0500 I SHARDING [conn19] Marking collection config.actionlog as collection version:
[ShardedClusterFixture:job0] Stopped the balancer
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.336-0500 I NETWORK [conn10] end connection 127.0.0.1:44216 (2 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.336-0500 I NETWORK [conn10] end connection 127.0.0.1:57356 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.336-0500 I NETWORK [conn11] end connection 127.0.0.1:44218 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.337-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44220 #12 (2 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.337-0500 I NETWORK [conn12] received client metadata from 127.0.0.1:44220 conn12: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0] Adding shard-rs0/localhost:20001,localhost:20002,localhost:20003 as a shard...
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.339-0500 I NETWORK [conn19] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.340-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.340-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.340-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38364 #30 (7 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.340-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52288 #25 (6 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.340-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51402 #25 (5 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.340-0500 I NETWORK [conn30] received client metadata from 127.0.0.1:38364 conn30: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.340-0500 I NETWORK [conn25] received client metadata from 127.0.0.1:52288 conn25: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.340-0500 I NETWORK [conn25] received client metadata from 127.0.0.1:51402 conn25: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.340-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.341-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38370 #31 (8 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.341-0500 I NETWORK [conn31] received client metadata from 127.0.0.1:38370 conn31: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.342-0500 I COMMAND [conn31] CMD: drop config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.342-0500 I SHARDING [conn31] initializing sharding state with: { shardName: "shard-rs0", clusterId: ObjectId('5ddd7d665cde74b6784bb161'), configsvrConnectionString: "config-rs/localhost:20000" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.347-0500 I NETWORK [conn31] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.347-0500 I SHARDING [thread27] creating distributed lock ping thread for process nz_desktop:20001:1574796657:-7363633931289976189 (sleeping for 30000ms)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.347-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.347-0500 I TXN [conn31] Incoming coordinateCommit requests are now enabled
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.347-0500 I SHARDING [ReplWriterWorker-8] initializing sharding state with: { shardName: "shard-rs0", clusterId: ObjectId('5ddd7d665cde74b6784bb161'), configsvrConnectionString: "config-rs/localhost:20000" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.347-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55632 #30 (11 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.347-0500 I SHARDING [conn31] Finished initializing sharding components for primary node.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.348-0500 I NETWORK [conn30] received client metadata from 127.0.0.1:55632 conn30: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.348-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.348-0500 I SHARDING [Sharding-Fixed-0] Updating config server with confirmed set config-rs/localhost:20000
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.348-0500 I NETWORK [ReplWriterWorker-8] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.348-0500 I SHARDING [thread17] creating distributed lock ping thread for process nz_desktop:20003:1574796657:-2123896925116328441 (sleeping for 30000ms)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.348-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55634 #31 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.348-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.349-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55636 #32 (13 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.349-0500 I NETWORK [conn31] received client metadata from 127.0.0.1:55634 conn31: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.349-0500 I NETWORK [conn32] received client metadata from 127.0.0.1:55636 conn32: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.349-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55638 #33 (14 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.350-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55640 #34 (15 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.350-0500 I NETWORK [conn33] received client metadata from 127.0.0.1:55638 conn33: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.350-0500 I SHARDING [ShardRegistry] Received reply from config server node (unknown) indicating config server optime term has increased, previous optime { ts: Timestamp(0, 0), t: -1 }, now { ts: Timestamp(1574796657, 6), t: 1 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.350-0500 I NETWORK [conn34] received client metadata from 127.0.0.1:55640 conn34: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.350-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.351-0500 I SHARDING [PeriodicBalancerConfigRefresher] ShouldAutoSplit changing from 1 to 0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.352-0500 I SHARDING [ReplWriterWorker-8] Finished initializing sharding components for secondary node.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.352-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55642 #35 (16 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.353-0500 I NETWORK [conn35] received client metadata from 127.0.0.1:55642 conn35: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.353-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.353-0500 I SHARDING [Sharding-Fixed-0] Updating config server with confirmed set config-rs/localhost:20000
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.353-0500 I SHARDING [ReplWriterWorker-3] initializing sharding state with: { shardName: "shard-rs0", clusterId: ObjectId('5ddd7d665cde74b6784bb161'), configsvrConnectionString: "config-rs/localhost:20000" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.354-0500 I NETWORK [ReplWriterWorker-3] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.354-0500 I COMMAND [conn31] setting featureCompatibilityVersion to upgrading to 4.4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.354-0500 I NETWORK [conn31] Skip closing connection for connection # 31
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.354-0500 I NETWORK [conn31] Skip closing connection for connection # 30
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.354-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.354-0500 I NETWORK [conn31] Skip closing connection for connection # 29
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.354-0500 I NETWORK [conn31] Skip closing connection for connection # 28
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.354-0500 I NETWORK [conn31] Skip closing connection for connection # 19
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.354-0500 I NETWORK [conn31] Skip closing connection for connection # 17
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.354-0500 I NETWORK [conn31] Skip closing connection for connection # 14
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.354-0500 I NETWORK [conn31] Skip closing connection for connection # 12
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.357-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55644 #36 (17 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.357-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55646 #37 (18 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.357-0500 I NETWORK [conn36] received client metadata from 127.0.0.1:55644 conn36: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.357-0500 I COMMAND [ReplWriterWorker-5] setting featureCompatibilityVersion to upgrading to 4.4
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.357-0500 I NETWORK [conn37] received client metadata from 127.0.0.1:55646 conn37: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.357-0500 I NETWORK [ReplWriterWorker-5] Skip closing connection for connection # 25
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.358-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55648 #38 (19 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.357-0500 I NETWORK [ReplWriterWorker-5] Skip closing connection for connection # 24
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.358-0500 I SHARDING [thread17] creating distributed lock ping thread for process nz_desktop:20002:1574796657:-961983587514488543 (sleeping for 30000ms)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.358-0500 I NETWORK [conn38] received client metadata from 127.0.0.1:55648 conn38: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.357-0500 I NETWORK [ReplWriterWorker-5] Skip closing connection for connection # 23
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.358-0500 I SHARDING [ReplWriterWorker-3] Finished initializing sharding components for secondary node.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.358-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55650 #39 (20 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.357-0500 I NETWORK [ReplWriterWorker-5] Skip closing connection for connection # 11
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.359-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.358-0500 I NETWORK [conn39] received client metadata from 127.0.0.1:55650 conn39: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.358-0500 I COMMAND [conn31] setting featureCompatibilityVersion to 4.4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.357-0500 I NETWORK [ReplWriterWorker-5] Skip closing connection for connection # 5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.359-0500 I SHARDING [Sharding-Fixed-0] Updating config server with confirmed set config-rs/localhost:20000
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.359-0500 I NETWORK [conn31] Skip closing connection for connection # 31
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.357-0500 I NETWORK [ReplWriterWorker-5] Skip closing connection for connection # 4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.359-0500 I COMMAND [ReplWriterWorker-6] setting featureCompatibilityVersion to upgrading to 4.4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.359-0500 I NETWORK [conn31] Skip closing connection for connection # 30
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.359-0500 I NETWORK [ReplWriterWorker-6] Skip closing connection for connection # 25
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.358-0500 I SHARDING [ShardRegistry] Received reply from config server node (unknown) indicating config server optime term has increased, previous optime { ts: Timestamp(0, 0), t: -1 }, now { ts: Timestamp(1574796657, 7), t: 1 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.359-0500 I NETWORK [conn31] Skip closing connection for connection # 29
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.359-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55652 #40 (21 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.359-0500 I NETWORK [ReplWriterWorker-6] Skip closing connection for connection # 18
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.359-0500 I NETWORK [conn31] Skip closing connection for connection # 28
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.360-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.359-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55654 #41 (22 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.359-0500 I NETWORK [ReplWriterWorker-6] Skip closing connection for connection # 16
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.359-0500 I NETWORK [conn31] Skip closing connection for connection # 19
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.360-0500 I NETWORK [conn40] received client metadata from 127.0.0.1:55652 conn40: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.359-0500 I NETWORK [ReplWriterWorker-6] Skip closing connection for connection # 11
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.359-0500 I NETWORK [conn31] Skip closing connection for connection # 17
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.360-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55656 #42 (23 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.359-0500 I NETWORK [ReplWriterWorker-6] Skip closing connection for connection # 4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.359-0500 I NETWORK [conn31] Skip closing connection for connection # 14
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.359-0500 I CONNPOOL [ReplWriterWorker-6] Dropping all pooled connections to localhost:20000 due to PooledConnectionsDropped: Pooled connections dropped
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.360-0500 I NETWORK [conn40] end connection 127.0.0.1:55652 (22 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.359-0500 I NETWORK [conn31] Skip closing connection for connection # 12
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.360-0500 I NETWORK [conn41] received client metadata from 127.0.0.1:55654 conn41: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.359-0500 I SHARDING [shard-registry-reload] Periodic reload of shard registry failed :: caused by :: PooledConnectionsDropped: could not get updated shard list from config server :: caused by :: Pooled connections dropped; will retry after 30s
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.360-0500 I NETWORK [conn41] end connection 127.0.0.1:55654 (21 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.361-0500 I COMMAND [ReplWriterWorker-10] setting featureCompatibilityVersion to 4.4
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.360-0500 I NETWORK [conn42] received client metadata from 127.0.0.1:55656 conn42: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.361-0500 I NETWORK [ReplWriterWorker-10] Skip closing connection for connection # 25
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.361-0500 I NETWORK [ReplWriterWorker-10] Skip closing connection for connection # 24
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.361-0500 I NETWORK [ReplWriterWorker-10] Skip closing connection for connection # 23
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.361-0500 I NETWORK [ReplWriterWorker-10] Skip closing connection for connection # 11
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.361-0500 I NETWORK [ReplWriterWorker-10] Skip closing connection for connection # 5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.361-0500 I NETWORK [ReplWriterWorker-10] Skip closing connection for connection # 4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.361-0500 I SHARDING [ShardRegistry] Received reply from config server node (unknown) indicating config server optime term has increased, previous optime { ts: Timestamp(0, 0), t: -1 }, now { ts: Timestamp(1574796657, 9), t: 1 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.361-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.362-0500 I COMMAND [ReplWriterWorker-4] setting featureCompatibilityVersion to 4.4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.362-0500 I NETWORK [ReplWriterWorker-4] Skip closing connection for connection # 25
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.362-0500 I NETWORK [ReplWriterWorker-4] Skip closing connection for connection # 18
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.362-0500 I NETWORK [ReplWriterWorker-4] Skip closing connection for connection # 16
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.362-0500 I NETWORK [ReplWriterWorker-4] Skip closing connection for connection # 11
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.362-0500 I NETWORK [ReplWriterWorker-4] Skip closing connection for connection # 4
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.362-0500 I SHARDING [conn19] going to insert new entry for shard into config.shards: { _id: "shard-rs0", host: "shard-rs0/localhost:20001,localhost:20002,localhost:20003", state: 1 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.362-0500 I STORAGE [conn19] createCollection: config.changelog with generated UUID: 65b892c8-48e9-4ca9-8300-743a486a361f and options: { capped: true, size: 209715200 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.373-0500 I INDEX [conn19] index build: done building index _id_ on ns config.changelog
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.374-0500 I SHARDING [conn19] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:30:57.374-0500-5ddd7d715cde74b6784bb1e5", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55580", time: new Date(1574796657374), what: "addShard", ns: "", details: { name: "shard-rs0", host: "shard-rs0/localhost:20001,localhost:20002,localhost:20003" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.374-0500 I SHARDING [conn19] Marking collection config.changelog as collection version:
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.375-0500 I NETWORK [conn12] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0] Adding shard-rs1/localhost:20004,localhost:20005,localhost:20006 as a shard...
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.376-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57398 #11 (2 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.376-0500 I NETWORK [conn11] received client metadata from 127.0.0.1:57398 conn11: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.377-0500 I NETWORK [conn22] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.378-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.378-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.378-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.378-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45866 #33 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.378-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51066 #21 (3 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.378-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34428 #22 (4 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.378-0500 I NETWORK [conn33] received client metadata from 127.0.0.1:45866 conn33: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.378-0500 I NETWORK [conn21] received client metadata from 127.0.0.1:51066 conn21: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.378-0500 I NETWORK [conn22] received client metadata from 127.0.0.1:34428 conn22: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.378-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.379-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45872 #34 (10 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.379-0500 I NETWORK [conn34] received client metadata from 127.0.0.1:45872 conn34: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.379-0500 I COMMAND [conn34] CMD: drop config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.380-0500 I SHARDING [conn34] initializing sharding state with: { shardName: "shard-rs1", clusterId: ObjectId('5ddd7d665cde74b6784bb161'), configsvrConnectionString: "config-rs/localhost:20000" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.384-0500 I NETWORK [conn34] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.385-0500 I SHARDING [thread30] creating distributed lock ping thread for process nz_desktop:20004:1574796657:2902281840457103640 (sleeping for 30000ms)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.385-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.385-0500 I SHARDING [ReplWriterWorker-8] initializing sharding state with: { shardName: "shard-rs1", clusterId: ObjectId('5ddd7d665cde74b6784bb161'), configsvrConnectionString: "config-rs/localhost:20000" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.385-0500 I SHARDING [ReplWriterWorker-10] initializing sharding state with: { shardName: "shard-rs1", clusterId: ObjectId('5ddd7d665cde74b6784bb161'), configsvrConnectionString: "config-rs/localhost:20000" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.386-0500 I NETWORK [ReplWriterWorker-8] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.386-0500 I NETWORK [ReplWriterWorker-10] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.389-0500 I TXN [conn34] Incoming coordinateCommit requests are now enabled
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.389-0500 I SHARDING [conn34] Finished initializing sharding components for primary node.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.389-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55668 #47 (22 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.389-0500 I NETWORK [conn47] received client metadata from 127.0.0.1:55668 conn47: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.389-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.389-0500 I SHARDING [Sharding-Fixed-0] Updating config server with confirmed set config-rs/localhost:20000
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.389-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.389-0500 I SHARDING [thread15] creating distributed lock ping thread for process nz_desktop:20006:1574796657:8805374381407459879 (sleeping for 30000ms)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.389-0500 I SHARDING [thread14] creating distributed lock ping thread for process nz_desktop:20005:1574796657:6300883275503185230 (sleeping for 30000ms)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.389-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.390-0500 I SHARDING [ReplWriterWorker-8] Finished initializing sharding components for secondary node.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.390-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55670 #48 (23 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.390-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55672 #49 (24 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.390-0500 I NETWORK [conn48] received client metadata from 127.0.0.1:55670 conn48: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.390-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55674 #50 (25 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.390-0500 I NETWORK [conn49] received client metadata from 127.0.0.1:55672 conn49: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.390-0500 I NETWORK [conn50] received client metadata from 127.0.0.1:55674 conn50: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.390-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55676 #51 (26 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.390-0500 I SHARDING [ShardRegistry] Received reply from config server node (unknown) indicating config server optime term has increased, previous optime { ts: Timestamp(0, 0), t: -1 }, now { ts: Timestamp(1574796657, 12), t: 1 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.390-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55678 #52 (27 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.390-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.391-0500 I SHARDING [Sharding-Fixed-0] Updating config server with confirmed set config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.391-0500 I NETWORK [conn51] received client metadata from 127.0.0.1:55676 conn51: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.391-0500 I NETWORK [conn52] received client metadata from 127.0.0.1:55678 conn52: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.391-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55680 #53 (28 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.391-0500 I NETWORK [shard-registry-reload] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.391-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55682 #54 (29 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.391-0500 I NETWORK [conn53] received client metadata from 127.0.0.1:55680 conn53: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.391-0500 I NETWORK [conn54] received client metadata from 127.0.0.1:55682 conn54: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.391-0500 I SHARDING [PeriodicBalancerConfigRefresher] ShouldAutoSplit changing from 1 to 0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.392-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55684 #55 (30 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.392-0500 I SHARDING [ShardRegistry] Received reply from config server node (unknown) indicating config server optime term has increased, previous optime { ts: Timestamp(0, 0), t: -1 }, now { ts: Timestamp(1574796657, 13), t: 1 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.392-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.392-0500 I NETWORK [conn55] received client metadata from 127.0.0.1:55684 conn55: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.392-0500 I NETWORK [shard-registry-reload] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.392-0500 I COMMAND [conn34] setting featureCompatibilityVersion to upgrading to 4.4
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.392-0500 I NETWORK [conn34] Skip closing connection for connection # 34
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.392-0500 I NETWORK [conn34] Skip closing connection for connection # 33
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.392-0500 I NETWORK [conn34] Skip closing connection for connection # 30
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.392-0500 I NETWORK [conn34] Skip closing connection for connection # 29
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.392-0500 I NETWORK [conn34] Skip closing connection for connection # 22
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.392-0500 I NETWORK [conn34] Skip closing connection for connection # 21
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.392-0500 I NETWORK [conn34] Skip closing connection for connection # 19
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.392-0500 I NETWORK [conn34] Skip closing connection for connection # 17
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.392-0500 I NETWORK [conn34] Skip closing connection for connection # 14
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.392-0500 I NETWORK [conn34] Skip closing connection for connection # 12
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.393-0500 I SHARDING [ReplWriterWorker-10] Finished initializing sharding components for secondary node.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.393-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55686 #56 (31 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.394-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.394-0500 I NETWORK [conn56] received client metadata from 127.0.0.1:55686 conn56: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.394-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.394-0500 I SHARDING [Sharding-Fixed-0] Updating config server with confirmed set config-rs/localhost:20000
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.394-0500 I COMMAND [ReplWriterWorker-14] setting featureCompatibilityVersion to upgrading to 4.4
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.394-0500 I NETWORK [ReplWriterWorker-14] Skip closing connection for connection # 22
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.394-0500 I NETWORK [ReplWriterWorker-14] Skip closing connection for connection # 11
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.394-0500 I NETWORK [ReplWriterWorker-14] Skip closing connection for connection # 6
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.394-0500 I NETWORK [ReplWriterWorker-14] Skip closing connection for connection # 4
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.394-0500 I CONNPOOL [ReplWriterWorker-14] Dropping all pooled connections to localhost:20000 due to PooledConnectionsDropped: Pooled connections dropped
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.394-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55688 #57 (32 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.395-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55690 #58 (33 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.395-0500 I COMMAND [ReplWriterWorker-9] setting featureCompatibilityVersion to upgrading to 4.4
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.395-0500 I NETWORK [ReplWriterWorker-9] Skip closing connection for connection # 21
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.395-0500 I SHARDING [shard-registry-reload] Periodic reload of shard registry failed :: caused by :: PooledConnectionsDropped: could not get updated shard list from config server :: caused by :: Pooled connections dropped; will retry after 30s
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.395-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: PooledConnectionsDropped: Pooled connections dropped
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.395-0500 I NETWORK [conn57] received client metadata from 127.0.0.1:55688 conn57: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.395-0500 I NETWORK [ReplWriterWorker-9] Skip closing connection for connection # 11
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.395-0500 I NETWORK [ReplWriterWorker-9] Skip closing connection for connection # 4
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.395-0500 I NETWORK [conn58] received client metadata from 127.0.0.1:55690 conn58: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.395-0500 I NETWORK [conn57] end connection 127.0.0.1:55688 (32 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.395-0500 I NETWORK [conn58] end connection 127.0.0.1:55690 (31 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.396-0500 I COMMAND [conn34] setting featureCompatibilityVersion to 4.4
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.396-0500 I NETWORK [conn34] Skip closing connection for connection # 34
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.396-0500 I NETWORK [conn34] Skip closing connection for connection # 33
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.396-0500 I NETWORK [conn34] Skip closing connection for connection # 30
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.396-0500 I NETWORK [conn34] Skip closing connection for connection # 29
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.396-0500 I NETWORK [conn34] Skip closing connection for connection # 22
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.396-0500 I NETWORK [conn34] Skip closing connection for connection # 21
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.396-0500 I NETWORK [conn34] Skip closing connection for connection # 19
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.396-0500 I NETWORK [conn34] Skip closing connection for connection # 17
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.396-0500 I NETWORK [conn34] Skip closing connection for connection # 14
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.396-0500 I NETWORK [conn34] Skip closing connection for connection # 12
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.397-0500 I COMMAND [ReplWriterWorker-7] setting featureCompatibilityVersion to 4.4
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.397-0500 I COMMAND [ReplWriterWorker-3] setting featureCompatibilityVersion to 4.4
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.397-0500 I NETWORK [ReplWriterWorker-7] Skip closing connection for connection # 21
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.398-0500 I NETWORK [ReplWriterWorker-3] Skip closing connection for connection # 22
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.397-0500 I NETWORK [ReplWriterWorker-7] Skip closing connection for connection # 11
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.398-0500 I NETWORK [ReplWriterWorker-3] Skip closing connection for connection # 11
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.397-0500 I NETWORK [ReplWriterWorker-7] Skip closing connection for connection # 4
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.398-0500 I NETWORK [ReplWriterWorker-3] Skip closing connection for connection # 6
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.398-0500 I NETWORK [ReplWriterWorker-3] Skip closing connection for connection # 4
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.399-0500 I SHARDING [conn22] going to insert new entry for shard into config.shards: { _id: "shard-rs1", host: "shard-rs1/localhost:20004,localhost:20005,localhost:20006", state: 1 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.399-0500 I SHARDING [conn22] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:30:57.399-0500-5ddd7d715cde74b6784bb208", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55594", time: new Date(1574796657399), what: "addShard", ns: "", details: { name: "shard-rs1", host: "shard-rs1/localhost:20004,localhost:20005,localhost:20006" } }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.400-0500 I NETWORK [conn11] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.400-0500 I NETWORK [conn11] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.402-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55692 #59 (32 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.402-0500 I NETWORK [conn59] received client metadata from 127.0.0.1:55692 conn59: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.403-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55694 #60 (33 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.403-0500 I NETWORK [conn60] received client metadata from 127.0.0.1:55694 conn60: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.405-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55696 #61 (34 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.405-0500 I NETWORK [conn61] received client metadata from 127.0.0.1:55696 conn61: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.405-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55698 #62 (35 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.406-0500 I NETWORK [conn62] received client metadata from 127.0.0.1:55698 conn62: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.406-0500 I NETWORK [conn62] end connection 127.0.0.1:55698 (34 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.406-0500 I NETWORK [conn61] end connection 127.0.0.1:55696 (33 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.407-0500 I NETWORK [conn60] end connection 127.0.0.1:55694 (32 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.407-0500 I NETWORK [conn59] end connection 127.0.0.1:55692 (31 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.408-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55700 #63 (32 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.408-0500 I NETWORK [conn63] received client metadata from 127.0.0.1:55700 conn63: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.408-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55702 #64 (33 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.409-0500 I NETWORK [conn64] received client metadata from 127.0.0.1:55702 conn64: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.409-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection config.system.sessions took 0 ms and found the collection is not sharded
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.410-0500 I SHARDING [conn64] distributed lock 'config' acquired for 'shardCollection', ts : 5ddd7d715cde74b6784bb217
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.412-0500 I SHARDING [conn64] distributed lock 'config.system.sessions' acquired for 'shardCollection', ts : 5ddd7d715cde74b6784bb219
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.412-0500 I SHARDING [conn64] Marking collection config.system.sessions as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.412-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38444 #37 (9 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.413-0500 I NETWORK [conn37] received client metadata from 127.0.0.1:38444 conn37: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.414-0500 I SHARDING [conn31] Marking collection config.chunks as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.415-0500 I STORAGE [conn37] createCollection: config.system.sessions with provided UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479 and options: { uuid: UUID("13cbac84-c366-42f3-b1e6-6924cc7c7479") }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.424-0500 I INDEX [conn37] index build: done building index _id_ on ns config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.425-0500 I INDEX [conn37] Registering index build: 35552dfb-3e75-4ced-9990-1d49b28e3fe5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.425-0500 I INDEX [conn37] Waiting for index build to complete: 35552dfb-3e75-4ced-9990-1d49b28e3fe5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.425-0500 I INDEX [conn37] Index build completed: 35552dfb-3e75-4ced-9990-1d49b28e3fe5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.426-0500 I STORAGE [ReplWriterWorker-3] createCollection: config.system.sessions with provided UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479 and options: { uuid: UUID("13cbac84-c366-42f3-b1e6-6924cc7c7479") }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.437-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.438-0500 I SHARDING [conn31] Marking collection config.tags as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.438-0500 I STORAGE [ReplWriterWorker-10] createCollection: config.system.sessions with provided UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479 and options: { uuid: UUID("13cbac84-c366-42f3-b1e6-6924cc7c7479") }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.439-0500 I NETWORK [conn37] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.439-0500 I NETWORK [conn37] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.439-0500 I SHARDING [conn37] CMD: shardcollection: { _shardsvrShardCollection: "config.system.sessions", key: { _id: 1 }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("033132de-6169-4b8e-9108-1f44ddfb4bf1"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796657, 18), signature: { hash: BinData(0, D4A12BC9CC96C739408A5C23B7634C70BC58BDC4), keyId: 6763700092420489256 } }, $client: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }, $configServerState: { opTime: { ts: Timestamp(1574796657, 18), t: 1 } }, $db: "admin" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.439-0500 I SHARDING [conn37] about to log metadata event into changelog: { _id: "nz_desktop:20001-2019-11-26T14:30:57.439-0500-5ddd7d713bbfe7fa5630d449", server: "nz_desktop:20001", shard: "shard-rs0", clientAddr: "127.0.0.1:38444", time: new Date(1574796657439), what: "shardCollection.start", ns: "config.system.sessions", details: { shardKey: { _id: 1 }, collection: "config.system.sessions", uuid: UUID("13cbac84-c366-42f3-b1e6-6924cc7c7479"), empty: true, fromMapReduce: false, primary: "shard-rs0:shard-rs0/localhost:20001,localhost:20002,localhost:20003", numChunks: 1 } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.441-0500 D4 TXN [conn31] New transaction started with txnNumber: 0 on session with lsid 221fcdc0-8ee3-4ced-acc5-980c8be690e9
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.443-0500 I STORAGE [conn31] createCollection: config.collections with generated UUID: c846d630-16e0-4675-b90f-3cd769544ef0 and options: {}
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.451-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.456-0500 I INDEX [conn31] index build: done building index _id_ on ns config.collections
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.457-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database config from version {} to version { uuid: UUID("02f27be9-a4c4-4db2-aad3-b2af43f1c585"), lastMod: 0 } took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.457-0500 I SHARDING [ShardServerCatalogCacheLoader-0] Marking collection config.cache.databases as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.457-0500 I STORAGE [ShardServerCatalogCacheLoader-0] createCollection: config.cache.databases with generated UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f and options: {}
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.490-0500 I SHARDING [ShardServerCatalogCacheLoader-1] Marking collection config.cache.collections as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.491-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection config.system.sessions to version 1|0||5ddd7d713bbfe7fa5630d44a took 33 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.491-0500 I STORAGE [ShardServerCatalogCacheLoader-1] createCollection: config.cache.collections with generated UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b and options: {}
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.492-0500 I SHARDING [conn37] Marking collection config.system.sessions as collection version: 1|0||5ddd7d713bbfe7fa5630d44a, shard version: 1|0||5ddd7d713bbfe7fa5630d44a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.492-0500 I SHARDING [conn37] Created 1 chunk(s) for: config.system.sessions, producing collection version 1|0||5ddd7d713bbfe7fa5630d44a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.492-0500 I SHARDING [conn37] about to log metadata event into changelog: { _id: "nz_desktop:20001-2019-11-26T14:30:57.492-0500-5ddd7d713bbfe7fa5630d451", server: "nz_desktop:20001", shard: "shard-rs0", clientAddr: "127.0.0.1:38444", time: new Date(1574796657492), what: "shardCollection.end", ns: "config.system.sessions", details: { version: "1|0||5ddd7d713bbfe7fa5630d44a" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.496-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection config.system.sessions to version 1|0||5ddd7d713bbfe7fa5630d44a took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.497-0500 I SHARDING [conn64] distributed lock with ts: 5ddd7d715cde74b6784bb219' unlocked.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.499-0500 I SHARDING [conn64] distributed lock with ts: 5ddd7d715cde74b6784bb217' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.499-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38446 #38 (10 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.499-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45914 #40 (11 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.500-0500 I NETWORK [conn38] received client metadata from 127.0.0.1:38446 conn38: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.500-0500 I NETWORK [conn40] received client metadata from 127.0.0.1:45914 conn40: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.500-0500 I INDEX [conn38] Registering index build: 359c95e8-61ed-44ee-b553-4c0889c233d6
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.502-0500 I SHARDING [conn52] distributed lock 'config' acquired for 'createCollection', ts : 5ddd7d715cde74b6784bb230
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.504-0500 I SHARDING [conn52] distributed lock 'config.system.sessions' acquired for 'createCollection', ts : 5ddd7d715cde74b6784bb232
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.504-0500 I STORAGE [conn52] createCollection: config.system.sessions with generated UUID: 9014747b-5aa2-462f-9e13-1e6b27298390 and options: {}
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.507-0500 I INDEX [ShardServerCatalogCacheLoader-0] index build: done building index _id_ on ns config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.510-0500 I STORAGE [ReplWriterWorker-9] createCollection: config.cache.databases with provided UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f and options: { uuid: UUID("d4db3d14-1174-436c-a1b7-966e3cf5246f") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.518-0500 I INDEX [conn52] index build: done building index _id_ on ns config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.520-0500 I SHARDING [conn52] Collection config.system.sessions already exists in sharding catalog as { _id: "config.system.sessions", lastmodEpoch: ObjectId('5ddd7d713bbfe7fa5630d44a'), lastmod: new Date(4294967296), dropped: false, key: { _id: 1 }, unique: false, uuid: UUID("13cbac84-c366-42f3-b1e6-6924cc7c7479"), distributionMode: "sharded" }, createCollection not writing new entry
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.521-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: done building index _id_ on ns config.cache.collections
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.521-0500 I SHARDING [conn52] distributed lock with ts: 5ddd7d715cde74b6784bb232' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.521-0500 I STORAGE [ShardServerCatalogCacheLoader-1] createCollection: config.cache.chunks.config.system.sessions with provided UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb and options: { uuid: UUID("64c6a829-dbfe-4506-b9df-8620f75d7efb") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.522-0500 I SHARDING [conn52] distributed lock with ts: 5ddd7d715cde74b6784bb230' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.526-0500 I INDEX [conn38] index build: starting on config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", expireAfterSeconds: 1800 } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.526-0500 I INDEX [conn38] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.526-0500 I STORAGE [conn38] Index build initialized: 359c95e8-61ed-44ee-b553-4c0889c233d6: config.system.sessions (13cbac84-c366-42f3-b1e6-6924cc7c7479 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.529-0500 I INDEX [conn38] Waiting for index build to complete: 359c95e8-61ed-44ee-b553-4c0889c233d6
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.529-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.531-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.533-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lsidTTLIndex on ns config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.541-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: done building index _id_ on ns config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.541-0500 I INDEX [ShardServerCatalogCacheLoader-1] Registering index build: bc9e5e8f-3574-4550-b7f3-3e2cf75edcef
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.542-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 359c95e8-61ed-44ee-b553-4c0889c233d6: config.system.sessions ( 13cbac84-c366-42f3-b1e6-6924cc7c7479 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.542-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.543-0500 I STORAGE [ReplWriterWorker-1] createCollection: config.cache.collections with provided UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b and options: { uuid: UUID("9215d95d-c07d-4373-a3d5-16d1fad88b5b") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.543-0500 I STORAGE [ReplWriterWorker-12] createCollection: config.cache.databases with provided UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f and options: { uuid: UUID("d4db3d14-1174-436c-a1b7-966e3cf5246f") }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.556-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: starting on config.cache.chunks.config.system.sessions properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.556-0500 I INDEX [ShardServerCatalogCacheLoader-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.556-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Index build initialized: bc9e5e8f-3574-4550-b7f3-3e2cf75edcef: config.cache.chunks.config.system.sessions (64c6a829-dbfe-4506-b9df-8620f75d7efb ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.556-0500 I INDEX [conn38] Index build completed: 359c95e8-61ed-44ee-b553-4c0889c233d6
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.556-0500 I INDEX [ShardServerCatalogCacheLoader-1] Waiting for index build to complete: bc9e5e8f-3574-4550-b7f3-3e2cf75edcef
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.557-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.557-0500 W WRITE [conn38] The $currentDate update operator is deprecated. As an alternative perform updates with an aggregation pipeline and either the 'NOW' or 'CLUSTER_TIME' system variables.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.557-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.559-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.560-0500 I SHARDING [ReplWriterWorker-12] Marking collection config.cache.databases as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.560-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.560-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.560-0500 I SHARDING [ReplWriterWorker-6] Marking collection config.cache.collections as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.561-0500 I STORAGE [ReplWriterWorker-14] createCollection: config.cache.collections with provided UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b and options: { uuid: UUID("9215d95d-c07d-4373-a3d5-16d1fad88b5b") }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.561-0500 I STORAGE [ReplWriterWorker-8] createCollection: config.cache.chunks.config.system.sessions with provided UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb and options: { uuid: UUID("64c6a829-dbfe-4506-b9df-8620f75d7efb") }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.562-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: bc9e5e8f-3574-4550-b7f3-3e2cf75edcef: config.cache.chunks.config.system.sessions ( 64c6a829-dbfe-4506-b9df-8620f75d7efb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.562-0500 I INDEX [ShardServerCatalogCacheLoader-1] Index build completed: bc9e5e8f-3574-4550-b7f3-3e2cf75edcef
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.562-0500 I SHARDING [ShardServerCatalogCacheLoader-1] Marking collection config.cache.chunks.config.system.sessions as collection version:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.574-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.578-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.579-0500 I SHARDING [ReplWriterWorker-0] Marking collection config.cache.collections as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.579-0500 I SHARDING [ReplWriterWorker-1] Marking collection config.cache.databases as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.580-0500 I STORAGE [ReplWriterWorker-15] createCollection: config.cache.chunks.config.system.sessions with provided UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb and options: { uuid: UUID("64c6a829-dbfe-4506-b9df-8620f75d7efb") }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.587-0500 I INDEX [ReplWriterWorker-10] index build: starting on config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", expireAfterSeconds: 1800 } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.587-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.587-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 539e2e90-6b22-4ad5-8ce3-3df30a988b40: config.system.sessions (13cbac84-c366-42f3-b1e6-6924cc7c7479 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.587-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.588-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.591-0500 I COMMAND [conn64] command admin.$cmd command: refreshLogicalSessionCacheNow { refreshLogicalSessionCacheNow: 1, lsid: { id: UUID("033132de-6169-4b8e-9108-1f44ddfb4bf1") }, $clusterTime: { clusterTime: Timestamp(1574796657, 16), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin", $readPreference: { mode: "primaryPreferred" } } numYields:0 reslen:272 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 5 } }, Global: { acquireCount: { r: 1, w: 4 } }, Database: { acquireCount: { r: 1, w: 4 } }, Collection: { acquireCount: { r: 2, w: 4 } }, Mutex: { acquireCount: { r: 9, W: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 182ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.596-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55710 #68 (34 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.596-0500 I NETWORK [conn68] received client metadata from 127.0.0.1:55710 conn68: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.596-0500 I NETWORK [conn64] end connection 127.0.0.1:55702 (33 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.596-0500 I SHARDING [ShardRegistry] Received reply from config server node (unknown) indicating config server optime term has increased, previous optime { ts: Timestamp(0, 0), t: -1 }, now { ts: Timestamp(1574796657, 33), t: 1 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.596-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38452 #39 (11 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.596-0500 I NETWORK [conn63] end connection 127.0.0.1:55700 (32 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.597-0500 I NETWORK [conn39] received client metadata from 127.0.0.1:38452 conn39: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.597-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38454 #40 (12 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.598-0500 I NETWORK [conn40] received client metadata from 127.0.0.1:38454 conn40: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.598-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 11 side writes (inserted: 11, deleted: 0) for 'lsidTTLIndex' in 6 ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.598-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lsidTTLIndex on ns config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.600-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.600-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.600-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.600-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38456 #41 (13 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.601-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52380 #30 (7 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.601-0500 I NETWORK [conn41] received client metadata from 127.0.0.1:38456 conn41: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.601-0500 I NETWORK [conn30] received client metadata from 127.0.0.1:52380 conn30: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.601-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.601-0500 I SHARDING [updateShardIdentityConfigString] Updating config server with confirmed set shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.601-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38462 #46 (14 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.602-0500 I NETWORK [conn46] received client metadata from 127.0.0.1:38462 conn46: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.602-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51494 #30 (6 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.602-0500 I NETWORK [conn30] received client metadata from 127.0.0.1:51494 conn30: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.602-0500 I INDEX [ReplWriterWorker-11] index build: starting on config.cache.chunks.config.system.sessions properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.602-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.602-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 91b69457-56e5-4f91-9439-91f3351a8c46: config.cache.chunks.config.system.sessions (64c6a829-dbfe-4506-b9df-8620f75d7efb ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.603-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.603-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 539e2e90-6b22-4ad5-8ce3-3df30a988b40: config.system.sessions ( 13cbac84-c366-42f3-b1e6-6924cc7c7479 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.604-0500 I SHARDING [ReplWriterWorker-13] Marking collection config.cache.chunks.config.system.sessions as collection version:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.604-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.607-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: drain applied 1 side writes (inserted: 1, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.607-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.608-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.608-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 91b69457-56e5-4f91-9439-91f3351a8c46: config.cache.chunks.config.system.sessions ( 64c6a829-dbfe-4506-b9df-8620f75d7efb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.615-0500 I NETWORK [conn39] end connection 127.0.0.1:38452 (13 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.615-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45930 #41 (12 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.615-0500 I NETWORK [conn40] end connection 127.0.0.1:38454 (12 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.615-0500 I NETWORK [conn41] received client metadata from 127.0.0.1:45930 conn41: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.616-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45932 #42 (13 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.616-0500 I NETWORK [conn42] received client metadata from 127.0.0.1:45932 conn42: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.617-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database config from version {} to version { uuid: UUID("d7482ace-7705-4e68-b85c-1ac8a3f9b153"), lastMod: 0 } took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.617-0500 I SHARDING [ShardServerCatalogCacheLoader-0] Marking collection config.cache.databases as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.618-0500 I STORAGE [ShardServerCatalogCacheLoader-0] createCollection: config.cache.databases with generated UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13 and options: {}
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.618-0500 I SHARDING [ShardServerCatalogCacheLoader-1] Marking collection config.cache.collections as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.619-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection config.system.sessions to version 1|0||5ddd7d713bbfe7fa5630d44a took 1 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.619-0500 I STORAGE [ShardServerCatalogCacheLoader-1] createCollection: config.cache.collections with generated UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c and options: {}
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.620-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38468 #47 (13 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.620-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51504 #31 (7 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.620-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52394 #31 (8 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.620-0500 I NETWORK [conn47] received client metadata from 127.0.0.1:38468 conn47: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.620-0500 I NETWORK [conn31] received client metadata from 127.0.0.1:51504 conn31: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.620-0500 I NETWORK [conn31] received client metadata from 127.0.0.1:52394 conn31: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.620-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.621-0500 I SHARDING [updateShardIdentityConfigString] Updating config server with confirmed set shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.621-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38474 #48 (14 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.621-0500 I INDEX [ReplWriterWorker-7] index build: starting on config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", expireAfterSeconds: 1800 } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.621-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.621-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: ef9c21d0-cb24-411e-abcf-31575249aeed: config.system.sessions (13cbac84-c366-42f3-b1e6-6924cc7c7479 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.621-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.621-0500 I NETWORK [conn48] received client metadata from 127.0.0.1:38474 conn48: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.622-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.625-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 11 side writes (inserted: 11, deleted: 0) for 'lsidTTLIndex' in 0 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.625-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lsidTTLIndex on ns config.system.sessions
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.627-0500 I NETWORK [conn11] end connection 127.0.0.1:57398 (1 connection now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.627-0500 I NETWORK [conn9] end connection 127.0.0.1:57352 (0 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.627-0500 I NETWORK [conn12] end connection 127.0.0.1:44220 (1 connection now open)
[fsm_workload_test:job0_fixture_setup] 2019-11-26T14:30:57.628-0500 Finished the setup of ShardedClusterFixture (Job #0).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.627-0500 I NETWORK [conn42] end connection 127.0.0.1:45932 (12 connections now open)
[executor:fsm_workload_test:job0] 2019-11-26T14:30:57.628-0500 job0_fixture_setup ran in 13.76 seconds: no failures detected.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.627-0500 I NETWORK [conn9] end connection 127.0.0.1:44212 (0 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.627-0500 I NETWORK [conn41] end connection 127.0.0.1:45930 (11 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.630-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57476 #12 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.630-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44336 #13 (1 connection now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.630-0500 I NETWORK [conn12] received client metadata from 127.0.0.1:57476 conn12: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[CheckReplDBHashInBackground:job0] Starting the background check repl dbhash thread.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.631-0500 I NETWORK [conn13] received client metadata from 127.0.0.1:44336 conn13: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.635-0500 I NETWORK [conn12] end connection 127.0.0.1:57476 (0 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.635-0500 I NETWORK [conn13] end connection 127.0.0.1:44336 (0 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.635-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: ef9c21d0-cb24-411e-abcf-31575249aeed: config.system.sessions ( 13cbac84-c366-42f3-b1e6-6924cc7c7479 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0] Resuming the background check repl dbhash thread.
[executor:fsm_workload_test:job0] 2019-11-26T14:30:57.637-0500 Running agg_out.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval TestData = new Object(); TestData["usingReplicaSetShards"] = true; TestData["runningWithAutoSplit"] = false; TestData["runningWithBalancer"] = false; TestData["fsmWorkloads"] = ["jstests/concurrency/fsm_workloads/agg_out.js"]; TestData["resmokeDbPathPrefix"] = "/home/nz_linux/data/job0/resmoke"; TestData["dbNamePrefix"] = "test0_"; TestData["sameDB"] = false; TestData["sameCollection"] = false; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "resmoke_runner"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); --readMode=commands mongodb://localhost:20007,localhost:20008 jstests/concurrency/fsm_libs/resmoke_runner.js
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.638-0500 Starting FSM workload jstests/concurrency/fsm_workloads/agg_out.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval TestData = new Object(); TestData["usingReplicaSetShards"] = true; TestData["runningWithAutoSplit"] = false; TestData["runningWithBalancer"] = false; TestData["fsmWorkloads"] = ["jstests/concurrency/fsm_workloads/agg_out.js"]; TestData["resmokeDbPathPrefix"] = "/home/nz_linux/data/job0/resmoke"; TestData["dbNamePrefix"] = "test0_"; TestData["sameDB"] = false; TestData["sameCollection"] = false; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "resmoke_runner"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); --readMode=commands mongodb://localhost:20007,localhost:20008 jstests/concurrency/fsm_libs/resmoke_runner.js
[executor:fsm_workload_test:job0] 2019-11-26T14:30:57.638-0500 Running agg_out:CheckReplDBHashInBackground...
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.639-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash_background.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash_background"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash_background.js
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.641-0500 I INDEX [ReplWriterWorker-4] index build: starting on config.cache.chunks.config.system.sessions properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.641-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.641-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: b5cebfe5-2137-47f2-a92f-6bf31f6379ce: config.cache.chunks.config.system.sessions (64c6a829-dbfe-4506-b9df-8620f75d7efb ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.642-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.642-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.638-0500 I INDEX [ShardServerCatalogCacheLoader-0] index build: done building index _id_ on ns config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.640-0500 I STORAGE [ReplWriterWorker-2] createCollection: config.cache.databases with provided UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13 and options: { uuid: UUID("e62da42c-0881-4ab9-ac4f-a628b927bd13") }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.640-0500 I STORAGE [ReplWriterWorker-5] createCollection: config.cache.databases with provided UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13 and options: { uuid: UUID("e62da42c-0881-4ab9-ac4f-a628b927bd13") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.643-0500 I SHARDING [ReplWriterWorker-3] Marking collection config.cache.chunks.config.system.sessions as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.644-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: done building index _id_ on ns config.cache.collections
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.645-0500 I STORAGE [ShardServerCatalogCacheLoader-1] createCollection: config.cache.chunks.config.system.sessions with provided UUID: 89d743ca-3d59-460f-a575-cb12eb122385 and options: { uuid: UUID("89d743ca-3d59-460f-a575-cb12eb122385") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.645-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 1 side writes (inserted: 1, deleted: 0) for 'lastmod_1' in 1 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.645-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.647-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: b5cebfe5-2137-47f2-a92f-6bf31f6379ce: config.cache.chunks.config.system.sessions ( 64c6a829-dbfe-4506-b9df-8620f75d7efb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.649-0500 FSM workload jstests/concurrency/fsm_workloads/agg_out.js started with pid 14933.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.652-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.653-0500 I STORAGE [ReplWriterWorker-7] createCollection: config.cache.collections with provided UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c and options: { uuid: UUID("20c18e31-cbdc-4c75-b799-d89f05ff917c") }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.653-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.654-0500 I STORAGE [ReplWriterWorker-2] createCollection: config.cache.collections with provided UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c and options: { uuid: UUID("20c18e31-cbdc-4c75-b799-d89f05ff917c") }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.659-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: done building index _id_ on ns config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.659-0500 I INDEX [ShardServerCatalogCacheLoader-1] Registering index build: 63fd3764-ab14-448e-ad6d-a2a093804fea
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.659-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js started with pid 14937.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.668-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns config.cache.collections
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.669-0500 I SHARDING [ReplWriterWorker-0] Marking collection config.cache.databases as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.670-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns config.cache.collections
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.670-0500 I SHARDING [ReplWriterWorker-15] Marking collection config.cache.collections as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.670-0500 I SHARDING [ReplWriterWorker-0] Marking collection config.cache.databases as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.671-0500 I STORAGE [ReplWriterWorker-9] createCollection: config.cache.chunks.config.system.sessions with provided UUID: 89d743ca-3d59-460f-a575-cb12eb122385 and options: { uuid: UUID("89d743ca-3d59-460f-a575-cb12eb122385") }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.671-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.671-0500 I SHARDING [ReplWriterWorker-15] Marking collection config.cache.collections as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.672-0500 I STORAGE [ReplWriterWorker-12] createCollection: config.cache.chunks.config.system.sessions with provided UUID: 89d743ca-3d59-460f-a575-cb12eb122385 and options: { uuid: UUID("89d743ca-3d59-460f-a575-cb12eb122385") }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.674-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: starting on config.cache.chunks.config.system.sessions properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.674-0500 I INDEX [ShardServerCatalogCacheLoader-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.674-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Index build initialized: 63fd3764-ab14-448e-ad6d-a2a093804fea: config.cache.chunks.config.system.sessions (89d743ca-3d59-460f-a575-cb12eb122385 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.674-0500 I INDEX [ShardServerCatalogCacheLoader-1] Waiting for index build to complete: 63fd3764-ab14-448e-ad6d-a2a093804fea
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.674-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.675-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.677-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.680-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 63fd3764-ab14-448e-ad6d-a2a093804fea: config.cache.chunks.config.system.sessions ( 89d743ca-3d59-460f-a575-cb12eb122385 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.680-0500 I INDEX [ShardServerCatalogCacheLoader-1] Index build completed: 63fd3764-ab14-448e-ad6d-a2a093804fea
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.680-0500 I SHARDING [ShardServerCatalogCacheLoader-1] Marking collection config.cache.chunks.config.system.sessions as collection version:
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.681-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.686-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.688-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.702-0500 I INDEX [ReplWriterWorker-14] index build: starting on config.cache.chunks.config.system.sessions properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.702-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.702-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 9b482de6-305e-4486-9323-dd61f4f8ce1e: config.cache.chunks.config.system.sessions (89d743ca-3d59-460f-a575-cb12eb122385 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.703-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.703-0500 I INDEX [ReplWriterWorker-9] index build: starting on config.cache.chunks.config.system.sessions properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.703-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.703-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.703-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 24c25621-bb76-400f-9db3-42d14bdb7db1: config.cache.chunks.config.system.sessions (89d743ca-3d59-460f-a575-cb12eb122385 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.704-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.704-0500 I SHARDING [ReplWriterWorker-2] Marking collection config.cache.chunks.config.system.sessions as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.704-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.705-0500 I SHARDING [ReplWriterWorker-5] Marking collection config.cache.chunks.config.system.sessions as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.706-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 1 side writes (inserted: 1, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.706-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.707-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 1 side writes (inserted: 1, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.708-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.708-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 9b482de6-305e-4486-9323-dd61f4f8ce1e: config.cache.chunks.config.system.sessions ( 89d743ca-3d59-460f-a575-cb12eb122385 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.709-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 24c25621-bb76-400f-9db3-42d14bdb7db1: config.cache.chunks.config.system.sessions ( 89d743ca-3d59-460f-a575-cb12eb122385 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.721-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.722-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44338 #14 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.722-0500 I NETWORK [conn14] received client metadata from 127.0.0.1:44338 conn14: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.724-0500 Implicit session: session { "id" : UUID("a25ad27f-0f45-4d8c-9a20-b4514599d004") }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.725-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.727-0500 true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.731-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.732-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44340 #15 (2 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.732-0500 I NETWORK [conn15] received client metadata from 127.0.0.1:44340 conn15: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.734-0500 Implicit session: session { "id" : UUID("66c530bb-3da1-44c6-a5c7-88f8031277d0") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.735-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.737-0500 true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.740-0500 2019-11-26T14:30:57.740-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.740-0500 2019-11-26T14:30:57.740-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.741-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55744 #69 (33 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.741-0500 I NETWORK [conn69] received client metadata from 127.0.0.1:55744 conn69: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.741-0500 2019-11-26T14:30:57.741-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.741-0500 2019-11-26T14:30:57.741-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.741-0500 2019-11-26T14:30:57.741-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.741-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55746 #70 (34 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.741-0500 I NETWORK [conn70] received client metadata from 127.0.0.1:55746 conn70: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.742-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55748 #71 (35 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.742-0500 I NETWORK [conn71] received client metadata from 127.0.0.1:55748 conn71: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.742-0500 2019-11-26T14:30:57.742-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.742-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55750 #72 (36 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.742-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.742-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.743-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.743-0500 [jsTest] New session started with sessionID: { "id" : UUID("d8a3e04a-ba1b-48f4-a09c-fb45109338db") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.743-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.743-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.743-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.742-0500 I NETWORK [conn72] received client metadata from 127.0.0.1:55750 conn72: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.743-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.743-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.743-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.743-0500 [jsTest] New session started with sessionID: { "id" : UUID("4a5d97cb-10da-4849-9214-5fff6cf9599d") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.744-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.744-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.744-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.744-0500 2019-11-26T14:30:57.744-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.744-0500 2019-11-26T14:30:57.744-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.744-0500 2019-11-26T14:30:57.744-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.745-0500 2019-11-26T14:30:57.744-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.745-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51526 #32 (8 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.745-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52418 #32 (9 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.745-0500 2019-11-26T14:30:57.745-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.745-0500 I NETWORK [conn32] received client metadata from 127.0.0.1:51526 conn32: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.745-0500 2019-11-26T14:30:57.745-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.745-0500 2019-11-26T14:30:57.745-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.745-0500 2019-11-26T14:30:57.745-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.745-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38494 #49 (15 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.746-0500 2019-11-26T14:30:57.745-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.745-0500 I NETWORK [conn32] received client metadata from 127.0.0.1:52418 conn32: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.745-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51534 #33 (9 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.745-0500 I NETWORK [conn49] received client metadata from 127.0.0.1:38494 conn49: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.745-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52424 #33 (10 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.746-0500 2019-11-26T14:30:57.746-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.745-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38498 #50 (16 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.746-0500 I NETWORK [conn33] received client metadata from 127.0.0.1:51534 conn33: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.746-0500 I NETWORK [conn33] received client metadata from 127.0.0.1:52424 conn33: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.746-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38504 #51 (17 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.746-0500 I NETWORK [conn50] received client metadata from 127.0.0.1:38498 conn50: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.746-0500 I NETWORK [conn51] received client metadata from 127.0.0.1:38504 conn51: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.746-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38506 #52 (18 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.746-0500 I NETWORK [conn52] received client metadata from 127.0.0.1:38506 conn52: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.747-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.747-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.747-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.747-0500 [jsTest] New session started with sessionID: { "id" : UUID("37e9e567-a959-4f2b-873c-738b33f10827") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.747-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.747-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.747-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.747-0500 2019-11-26T14:30:57.747-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.747-0500 2019-11-26T14:30:57.747-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.747-0500 2019-11-26T14:30:57.747-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.747-0500 2019-11-26T14:30:57.747-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.747-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.747-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.747-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34532 #27 (5 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.747-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.747-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45976 #47 (12 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.748-0500 [jsTest] New session started with sessionID: { "id" : UUID("bb431d4b-31d8-425a-b0b9-41615b0d804b") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.748-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.748-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.748-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.748-0500 2019-11-26T14:30:57.748-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.748-0500 2019-11-26T14:30:57.748-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.748-0500 2019-11-26T14:30:57.748-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.748-0500 2019-11-26T14:30:57.748-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.747-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51176 #26 (4 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.748-0500 2019-11-26T14:30:57.748-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.748-0500 I NETWORK [conn27] received client metadata from 127.0.0.1:34532 conn27: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.749-0500 2019-11-26T14:30:57.748-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.748-0500 I NETWORK [conn47] received client metadata from 127.0.0.1:45976 conn47: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.748-0500 I NETWORK [conn26] received client metadata from 127.0.0.1:51176 conn26: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.748-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45984 #48 (13 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.748-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45986 #49 (14 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.748-0500 I NETWORK [conn48] received client metadata from 127.0.0.1:45984 conn48: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.748-0500 I NETWORK [conn49] received client metadata from 127.0.0.1:45986 conn49: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.749-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45988 #50 (15 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.749-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.749-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.749-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.749-0500 [jsTest] New session started with sessionID: { "id" : UUID("ef11e494-27e7-42b2-9602-814de039a853") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.749-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.749-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.749-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.749-0500 I NETWORK [conn50] received client metadata from 127.0.0.1:45988 conn50: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.750-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.750-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.750-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.750-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.750-0500 [jsTest] New session started with sessionID: { "id" : UUID("7e794ed9-cbb9-4986-8b2b-2799a098f8b4") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.750-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.750-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.750-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.758-0500 setting random seed: 1804224863
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.759-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44382 #16 (3 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.759-0500 I NETWORK [conn16] received client metadata from 127.0.0.1:44382 conn16: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.759-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.759-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.760-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.760-0500 [jsTest] New session started with sessionID: { "id" : UUID("50f2e91e-4790-41ce-a194-c5a33dc5e3b5") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.760-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.760-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.760-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.761-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55786 #73 (37 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.761-0500 I NETWORK [conn73] received client metadata from 127.0.0.1:55786 conn73: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.762-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.762-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.762-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.762-0500 [jsTest] New session started with sessionID: { "id" : UUID("d2236a0b-c5e7-4396-818b-a248e75c27a6") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.762-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.762-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.762-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.764-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38528 #53 (19 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.764-0500 I NETWORK [conn53] received client metadata from 127.0.0.1:38528 conn53: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.765-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.765-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.765-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.765-0500 [jsTest] New session started with sessionID: { "id" : UUID("d09f00ce-bb11-4e5c-954d-cd956703c4ea") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.765-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.765-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.765-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.766-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45996 #51 (16 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.766-0500 I NETWORK [conn51] received client metadata from 127.0.0.1:45996 conn51: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:57.767-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.768-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44390 #17 (4 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:57.807-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.769-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57534 #13 (1 connection now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.770-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34538 #28 (6 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.350-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.770-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51180 #27 (5 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.350-0500 Implicit session: session { "id" : UUID("72ce6b61-4463-4b29-ac0a-6ed8c809707a") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.772-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55796 #74 (38 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.351-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.775-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38540 #54 (20 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.351-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.779-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51578 #34 (10 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.351-0500 [jsTest] New session started with sessionID: { "id" : UUID("55d9c833-0466-40f1-bc22-6f4e039f5bdf") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.780-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52468 #34 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.351-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.783-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46016 #52 (17 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.352-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.769-0500 I NETWORK [conn17] received client metadata from 127.0.0.1:44390 conn17: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.352-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:57.769-0500 I NETWORK [conn13] received client metadata from 127.0.0.1:57534 conn13: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.352-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.770-0500 I NETWORK [conn28] received client metadata from 127.0.0.1:34538 conn28: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.770-0500 I NETWORK [conn27] received client metadata from 127.0.0.1:51180 conn27: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.352-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.772-0500 I NETWORK [conn74] received client metadata from 127.0.0.1:55796 conn74: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.352-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.353-0500 [jsTest] New session started with sessionID: { "id" : UUID("2897aeb0-6d01-4351-92c4-317c91b3e466") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.353-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.353-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.353-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.353-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.353-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.353-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.353-0500 [jsTest] New session started with sessionID: { "id" : UUID("25eebded-b395-466c-a1f4-876229ac5bcf") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.353-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.353-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.353-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.353-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.353-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.353-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.354-0500 [jsTest] New session started with sessionID: { "id" : UUID("62943891-2062-42d5-9fc8-e503574e9272") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.354-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.354-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.354-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.354-0500 Running data consistency checks for replica set: shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.354-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.354-0500 Implicit session: session { "id" : UUID("5d249494-71d5-40c1-99c7-e57203653d81") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.354-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.354-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.354-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.354-0500 [jsTest] New session started with sessionID: { "id" : UUID("3419a97d-f146-4648-89c3-dd73b18d6087") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.780-0500 I NETWORK [conn34] received client metadata from 127.0.0.1:51578 conn34: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.354-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.780-0500 I NETWORK [conn34] received client metadata from 127.0.0.1:52468 conn34: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.354-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.776-0500 I NETWORK [conn54] received client metadata from 127.0.0.1:38540 conn54: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.354-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.783-0500 I NETWORK [conn52] received client metadata from 127.0.0.1:46016 conn52: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.354-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.807-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44418 #18 (5 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.355-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.295-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57626 #14 (2 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.355-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.787-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34580 #29 (7 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.355-0500 [jsTest] New session started with sessionID: { "id" : UUID("6c4c146e-b24d-4a40-997b-ded3fc3b0bfa") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.786-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51218 #28 (6 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.355-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.774-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55798 #75 (39 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.355-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.818-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51602 #35 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.355-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.819-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52492 #35 (12 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.356-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.779-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38542 #55 (21 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.356-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.786-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46018 #53 (18 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.356-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.807-0500 I NETWORK [conn18] received client metadata from 127.0.0.1:44418 conn18: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.356-0500 [jsTest] New session started with sessionID: { "id" : UUID("2d0264fe-ec30-4fc5-b0ba-fd44155528a6") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.296-0500 I NETWORK [conn14] received client metadata from 127.0.0.1:57626 conn14: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.356-0500 Recreating replica set from config {
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.787-0500 I NETWORK [conn29] received client metadata from 127.0.0.1:34580 conn29: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.356-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.787-0500 I NETWORK [conn28] received client metadata from 127.0.0.1:51218 conn28: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.357-0500 "_id" : "config-rs",
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.774-0500 I NETWORK [conn75] received client metadata from 127.0.0.1:55798 conn75: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.357-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.818-0500 I NETWORK [conn35] received client metadata from 127.0.0.1:51602 conn35: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.357-0500 "version" : 1,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.819-0500 I NETWORK [conn35] received client metadata from 127.0.0.1:52492 conn35: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.357-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.779-0500 I NETWORK [conn55] received client metadata from 127.0.0.1:38542 conn55: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.357-0500 "configsvr" : true,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.786-0500 I NETWORK [conn53] received client metadata from 127.0.0.1:46018 conn53: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.357-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.824-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44430 #19 (6 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.357-0500 "protocolVersion" : NumberLong(1),
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.296-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57630 #15 (3 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.358-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.836-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34604 #30 (8 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.358-0500 "writeConcernMajorityJournalDefault" : true,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.835-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51242 #29 (7 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.358-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.805-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0' acquired for 'dropCollection', ts : 5ddd7d715cde74b6784bb257
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.358-0500 "members" : [
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.829-0500 W CONTROL [conn35] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.829-0500 W CONTROL [conn35] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.358-0500 [jsTest] New session started with sessionID: { "id" : UUID("1d5243c7-3132-4949-a91f-5a0cdb0e0c19") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.781-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38548 #56 (22 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.358-0500 {
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.788-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46024 #54 (19 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.359-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.825-0500 I NETWORK [conn19] received client metadata from 127.0.0.1:44430 conn19: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.359-0500 "_id" : 0,
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.297-0500 I NETWORK [conn15] received client metadata from 127.0.0.1:57630 conn15: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.359-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.836-0500 I NETWORK [conn30] received client metadata from 127.0.0.1:34604 conn30: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.359-0500 "host" : "localhost:20000",
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.835-0500 I NETWORK [conn29] received client metadata from 127.0.0.1:51242 conn29: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.807-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0.fsmcoll0' acquired for 'dropCollection', ts : 5ddd7d715cde74b6784bb259
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.359-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.833-0500 W CONTROL [conn35] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 0 }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.359-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.834-0500 W CONTROL [conn35] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.360-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.781-0500 I NETWORK [conn56] received client metadata from 127.0.0.1:38548 conn56: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.360-0500 "buildIndexes" : true,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.788-0500 I NETWORK [conn54] received client metadata from 127.0.0.1:46024 conn54: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.360-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.360-0500 "hidden" : false,
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.835-0500 I NETWORK [conn18] end connection 127.0.0.1:44418 (5 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.360-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.308-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.360-0500 "priority" : 1,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.846-0500 W CONTROL [conn30] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.361-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.846-0500 W CONTROL [conn29] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 0 }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.361-0500 "tags" : {
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.808-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d715cde74b6784bb259' unlocked.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.361-0500 [jsTest] New session started with sessionID: { "id" : UUID("32e870b6-2de9-4506-9c83-c7852c41686b") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.836-0500 I NETWORK [conn35] end connection 127.0.0.1:51602 (10 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.361-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.836-0500 I NETWORK [conn35] end connection 127.0.0.1:52492 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.361-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.815-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38564 #57 (23 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.361-0500 },
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.814-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46028 #55 (20 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.361-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.884-0500 I NETWORK [conn19] end connection 127.0.0.1:44430 (4 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.362-0500 "slaveDelay" : NumberLong(0),
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.309-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.fsmcoll0 to version 1|3||5ddd7d71cf8184c2e1492ff8 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.848-0500 I STORAGE [ReplWriterWorker-6] createCollection: test0_fsmdb0.fsmcoll0 with provided UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3 and options: { uuid: UUID("d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.362-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.848-0500 I STORAGE [ReplWriterWorker-3] createCollection: test0_fsmdb0.fsmcoll0 with provided UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3 and options: { uuid: UUID("d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.809-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d715cde74b6784bb257' unlocked.
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.362-0500 "votes" : 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.894-0500 I NETWORK [conn32] end connection 127.0.0.1:51526 (9 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.362-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.894-0500 I NETWORK [conn32] end connection 127.0.0.1:52418 (10 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.362-0500 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.815-0500 I NETWORK [conn57] received client metadata from 127.0.0.1:38564 conn57: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.363-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.814-0500 I NETWORK [conn55] received client metadata from 127.0.0.1:46028 conn55: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.363-0500 ],
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.888-0500 I NETWORK [conn15] end connection 127.0.0.1:44340 (3 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.363-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.363-0500 "settings" : {
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.863-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test0_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.363-0500 [jsTest] New session started with sessionID: { "id" : UUID("e6bc04dc-5436-49e7-b465-bdc8914f8c67") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.863-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.874-0500 I INDEX [ReplWriterWorker-10] index build: starting on test0_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.363-0500 "chainingAllowed" : true,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.909-0500 I STORAGE [ReplWriterWorker-8] createCollection: test0_fsmdb0.fsmcoll0 with provided UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3 and options: { uuid: UUID("d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.363-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.897-0500 I STORAGE [ReplWriterWorker-7] createCollection: test0_fsmdb0.fsmcoll0 with provided UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3 and options: { uuid: UUID("d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3") }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.364-0500 "heartbeatIntervalMillis" : 2000,
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.817-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38566 #58 (24 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.364-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.828-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.364-0500 "heartbeatTimeoutSecs" : 10,
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:57.981-0500 I COMMAND [conn17] command test0_fsmdb0.fsmcoll0 appName: "MongoDB Shell" command: shardCollection { shardCollection: "test0_fsmdb0.fsmcoll0", key: { _id: "hashed" }, lsid: { id: UUID("05f83359-a3f0-406c-a0fb-ce607d9e5952") }, $clusterTime: { clusterTime: Timestamp(1574796657, 80), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:245 protocol:op_msg 150ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.364-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.364-0500 "electionTimeoutMillis" : 86400000,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.812-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d715cde74b6784bb261
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.364-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.874-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.364-0500 "catchUpTimeoutMillis" : -1,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.923-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test0_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.365-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.908-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test0_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.365-0500 "catchUpTakeoverDelayMillis" : 30000,
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.818-0500 I NETWORK [conn58] received client metadata from 127.0.0.1:38566 conn58: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.365-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.828-0500 I NETWORK [conn55] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.365-0500 "getLastErrorModes" : {
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.080-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.365-0500 [jsTest] New session started with sessionID: { "id" : UUID("2749ed4d-1704-4069-9ee9-e6005aff210f") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.874-0500 I INDEX [ReplWriterWorker-12] index build: starting on test0_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.365-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.365-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.815-0500 I SHARDING [conn19] Registering new database { _id: "test0_fsmdb0", primary: "shard-rs1", partitioned: false, version: { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } } in sharding catalog
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.366-0500 },
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.874-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 08fb7f4f-3f2a-455c-a373-add791180abe: test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.366-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.939-0500 I INDEX [ReplWriterWorker-13] index build: starting on test0_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.366-0500 "getLastErrorDefaults" : {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.924-0500 I INDEX [ReplWriterWorker-6] index build: starting on test0_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.366-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.828-0500 W CONTROL [conn58] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 0 }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.366-0500 "w" : 1,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.828-0500 I SHARDING [conn55] setting this node's cached database version for test0_fsmdb0 to { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.366-0500 Running data consistency checks for replica set: shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.081-0500 I NETWORK [conn17] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.366-0500 "wtimeout" : 0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.874-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.366-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.880-0500 },
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.816-0500 I STORAGE [conn19] createCollection: config.databases with generated UUID: 1c31f9a7-ee46-41d3-a296-2e1f323b51b8 and options: {}
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.880-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.874-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.881-0500 "replicaSetId" : ObjectId("5ddd7d655cde74b6784bb14d")
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.939-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.881-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.924-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.881-0500 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.833-0500 W CONTROL [conn58] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.881-0500 [jsTest] New session started with sessionID: { "id" : UUID("75e9faee-7503-4f9e-a095-9cea46dd9474") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.832-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46040 #56 (21 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.881-0500 }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.082-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.fsmcoll0 to version 1|3||5ddd7d71cf8184c2e1492ff8 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.881-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.874-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 87d2cad9-4144-4fbe-8718-9cde37e402aa: test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3 ): indexes: 1
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.882-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.882-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.825-0500 I INDEX [conn19] index build: done building index _id_ on ns config.databases
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.882-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.875-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.882-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.939-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: e1f94162-a3ce-472b-99f1-e522ba97d37c: test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3 ): indexes: 1
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.882-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.924-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 2e1f4654-f9e5-4ec4-99c7-d5b608de1377: test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.882-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.835-0500 I NETWORK [conn57] end connection 127.0.0.1:38564 (23 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.883-0500 [jsTest] New session started with sessionID: { "id" : UUID("0897fc81-6bf1-4d9e-b7a1-c56e9b071d35") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.832-0500 I NETWORK [conn56] received client metadata from 127.0.0.1:46040 conn56: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.883-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.883-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.310-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.883-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.829-0500 I SHARDING [conn19] Enabling sharding for database [test0_fsmdb0] in config db
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.883-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.878-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.883-0500 [jsTest] New session started with sessionID: { "id" : UUID("c8feff48-e255-4945-89c6-47a33cbd47c8") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.939-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.883-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.940-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.884-0500 Recreating replica set from config {
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.835-0500 I NETWORK [conn58] end connection 127.0.0.1:38566 (22 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.884-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.835-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46042 #57 (22 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.884-0500 "_id" : "shard-rs0",
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.874-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.884-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.884-0500 "version" : 2,
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.311-0500 I SHARDING [UpdateReplicaSetOnConfigServer] Updating sharding state with confirmed set shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.884-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.830-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d715cde74b6784bb261' unlocked.
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.885-0500 "protocolVersion" : NumberLong(1),
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.878-0500 I SHARDING [conn29] Marking collection admin.run_check_repl_dbhash_background as collection version:
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.885-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.925-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.885-0500 "writeConcernMajorityJournalDefault" : true,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.943-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.885-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.881-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.885-0500 "members" : [
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.835-0500 I NETWORK [conn57] received client metadata from 127.0.0.1:46042 conn57: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.885-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.875-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.886-0500 {
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.886-0500 [jsTest] New session started with sessionID: { "id" : UUID("f3b43960-38c8-4a17-b6e2-c398e86cff38") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.311-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.886-0500 "_id" : 0,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.832-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d715cde74b6784bb26b
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.886-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.880-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 08fb7f4f-3f2a-455c-a373-add791180abe: test0_fsmdb0.fsmcoll0 ( d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.886-0500 "host" : "localhost:20001",
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.925-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.886-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.945-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e1f94162-a3ce-472b-99f1-e522ba97d37c: test0_fsmdb0.fsmcoll0 ( d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.887-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.881-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.887-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.835-0500 I STORAGE [conn55] createCollection: test0_fsmdb0.fsmcoll0 with provided UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3 and options: { uuid: UUID("d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3") }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.887-0500 "buildIndexes" : true,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.878-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.887-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js finished.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.086-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.887-0500 "hidden" : false,
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.311-0500 I SHARDING [Sharding-Fixed-1] Updating sharding state with confirmed set shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.889-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash_background.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash_background"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash_background.js
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.833-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0.fsmcoll0' acquired for 'shardCollection', ts : 5ddd7d715cde74b6784bb26d
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.889-0500 "priority" : 1,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.882-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51246 #30 (8 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.889-0500 "tags" : {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.927-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.893-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.967-0500 I STORAGE [ReplWriterWorker-7] createCollection: config.cache.chunks.test0_fsmdb0.fsmcoll0 with provided UUID: 44049d48-fa0f-4a8e-b7c3-56550b94d236 and options: { uuid: UUID("44049d48-fa0f-4a8e-b7c3-56550b94d236") }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.893-0500 },
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.881-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.893-0500 "slaveDelay" : NumberLong(0),
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.845-0500 W CONTROL [conn57] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 0 }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.894-0500 "votes" : 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.879-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 87d2cad9-4144-4fbe-8718-9cde37e402aa: test0_fsmdb0.fsmcoll0 ( d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.894-0500 },
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.086-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.894-0500 {
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.570-0500 I COMMAND [conn15] command test0_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("5033fb0b-b223-4faf-974b-5b5f981f137e") }, $db: "test0_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 262ms
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.894-0500 "_id" : 1,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.888-0500 I NETWORK [conn70] end connection 127.0.0.1:55746 (38 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.895-0500 "host" : "localhost:20002",
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.882-0500 I NETWORK [conn30] received client metadata from 127.0.0.1:51246 conn30: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.895-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.895-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.895-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.895-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.895-0500 "tags" : {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.929-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 2e1f4654-f9e5-4ec4-99c7-d5b608de1377: test0_fsmdb0.fsmcoll0 ( d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.895-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:57.984-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns config.cache.chunks.test0_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.895-0500 },
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.882-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.896-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.896-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.896-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.896-0500 {
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.845-0500 I INDEX [conn55] index build: done building index _id_ on ns test0_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.896-0500 "_id" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.896-0500 "host" : "localhost:20003",
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.896-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.879-0500 I SHARDING [conn30] Marking collection admin.run_check_repl_dbhash_background as collection version:
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.896-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.896-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.896-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.896-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.896-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.897-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.897-0500 "slaveDelay" : NumberLong(0),
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.087-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.897-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.897-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.897-0500 ],
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.897-0500 "settings" : {
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:30:58.570-0500 I COMMAND [conn14] command test0_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("14894096-7669-49a5-82f4-aeca25550ea6") }, $db: "test0_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 262ms
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.897-0500 "chainingAllowed" : true,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.894-0500 I NETWORK [conn69] end connection 127.0.0.1:55744 (37 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.897-0500 "heartbeatIntervalMillis" : 2000,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.883-0500 W CONTROL [conn29] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 4 }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.897-0500 "heartbeatTimeoutSecs" : 10,
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.897-0500 "electionTimeoutMillis" : 86400000,
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.897-0500 "catchUpTimeoutMillis" : -1,
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.897-0500 "catchUpTakeoverDelayMillis" : 30000,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.952-0500 I STORAGE [ReplWriterWorker-13] createCollection: config.cache.chunks.test0_fsmdb0.fsmcoll0 with provided UUID: 44049d48-fa0f-4a8e-b7c3-56550b94d236 and options: { uuid: UUID("44049d48-fa0f-4a8e-b7c3-56550b94d236") }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.898-0500 "getLastErrorModes" : {
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.898-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.898-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.898-0500 "getLastErrorDefaults" : {
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.898-0500 "w" : 1,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:58.029-0500 I INDEX [ReplWriterWorker-8] index build: starting on config.cache.chunks.test0_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.898-0500 "wtimeout" : 0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.898-0500 },
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.882-0500 I SHARDING [updateShardIdentityConfigString] Updating config server with confirmed set shard-rs1/localhost:20004,localhost:20005,localhost:20006
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.898-0500 "replicaSetId" : ObjectId("5ddd7d683bbfe7fa5630d3b8")
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.846-0500 I INDEX [conn55] Registering index build: 1fb684bc-3d29-4564-a087-fcde85da89f4
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.898-0500 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.882-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34610 #31 (9 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.898-0500 }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.087-0500 I SHARDING [UpdateReplicaSetOnConfigServer] Updating sharding state with confirmed set shard-rs0/localhost:20001,localhost:20002,localhost:20003
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.899-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.928-0500 D4 TXN [conn52] New transaction started with txnNumber: 0 on session with lsid c577c27d-39d0-4ac1-a35d-973ba828856f
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.899-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.885-0500 I NETWORK [conn29] end connection 127.0.0.1:51242 (7 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.899-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.966-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns config.cache.chunks.test0_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.899-0500 [jsTest] New session started with sessionID: { "id" : UUID("6a11d55b-b643-485c-bf58-5aca750c6309") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:58.029-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.899-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.884-0500 I STORAGE [conn48] createCollection: test0_fsmdb0.fsmcoll0 with provided UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3 and options: { uuid: UUID("d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.900-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js started with pid 15073.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.858-0500 I INDEX [conn55] index build: starting on test0_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.900-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.882-0500 I NETWORK [conn31] received client metadata from 127.0.0.1:34610 conn31: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.900-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.900-0500 Recreating replica set from config {
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.087-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.900-0500 "_id" : "shard-rs1",
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.976-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.900-0500 "version" : 2,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.894-0500 I NETWORK [conn26] end connection 127.0.0.1:51176 (6 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.901-0500 "protocolVersion" : NumberLong(1),
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.986-0500 I INDEX [ReplWriterWorker-7] index build: starting on config.cache.chunks.test0_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.901-0500 "writeConcernMajorityJournalDefault" : true,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:58.029-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 6bf498b2-3257-4037-9549-e00c4123e58e: config.cache.chunks.test0_fsmdb0.fsmcoll0 (44049d48-fa0f-4a8e-b7c3-56550b94d236 ): indexes: 1
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.901-0500 "members" : [
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.888-0500 I NETWORK [conn51] end connection 127.0.0.1:38504 (21 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.901-0500 {
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.858-0500 I INDEX [conn55] build may temporarily use up to 500 megabytes of RAM
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.901-0500 "_id" : 0,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.883-0500 W CONTROL [conn30] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 4 }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.901-0500 "host" : "localhost:20004",
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.091-0500 I SHARDING [Sharding-Fixed-1] Updating sharding state with confirmed set shard-rs1/localhost:20004,localhost:20005,localhost:20006
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.901-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.976-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.fsmcoll0 to version 1|3||5ddd7d71cf8184c2e1492ff8 took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.902-0500 "buildIndexes" : true,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.950-0500 I STORAGE [ReplWriterWorker-6] createCollection: config.cache.chunks.test0_fsmdb0.fsmcoll0 with provided UUID: dad6441c-7462-448b-9e35-8123157c4429 and options: { uuid: UUID("dad6441c-7462-448b-9e35-8123157c4429") }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.902-0500 "hidden" : false,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.986-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.902-0500 "priority" : 1,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:58.030-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.902-0500 "tags" : {
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.894-0500 I NETWORK [conn49] end connection 127.0.0.1:38494 (20 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.902-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.858-0500 I STORAGE [conn55] Index build initialized: 1fb684bc-3d29-4564-a087-fcde85da89f4: test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3 ): indexes: 1
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.902-0500 },
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.885-0500 I NETWORK [conn30] end connection 127.0.0.1:34604 (8 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.902-0500 "slaveDelay" : NumberLong(0),
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.183-0500 I NETWORK [conn17] Successfully connected to shard-rs0/localhost:20001,localhost:20002,localhost:20003 (1 connections now open to shard-rs0/localhost:20001,localhost:20002,localhost:20003 with a 0 second timeout)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.902-0500 "votes" : 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.980-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d715cde74b6784bb26d' unlocked.
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.903-0500 },
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.967-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns config.cache.chunks.test0_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.903-0500 {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.986-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: e183b86a-4a3d-404c-9a0c-03c1f20253fb: config.cache.chunks.test0_fsmdb0.fsmcoll0 (44049d48-fa0f-4a8e-b7c3-56550b94d236 ): indexes: 1
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.903-0500 "_id" : 1,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:58.030-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.903-0500 "host" : "localhost:20005",
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.895-0500 I INDEX [conn48] index build: done building index _id_ on ns test0_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.903-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.858-0500 I INDEX [conn55] Waiting for index build to complete: 1fb684bc-3d29-4564-a087-fcde85da89f4
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.903-0500 "buildIndexes" : true,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.894-0500 I NETWORK [conn27] end connection 127.0.0.1:34532 (7 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.903-0500 "hidden" : false,
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.184-0500 I NETWORK [conn17] Successfully connected to shard-rs1/localhost:20004,localhost:20005,localhost:20006 (1 connections now open to shard-rs1/localhost:20004,localhost:20005,localhost:20006 with a 0 second timeout)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.904-0500 "priority" : 0,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.981-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d715cde74b6784bb26b' unlocked.
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.904-0500 "tags" : {
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.987-0500 I INDEX [ReplWriterWorker-1] index build: starting on config.cache.chunks.test0_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.904-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.986-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.904-0500 },
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:58.031-0500 I SHARDING [ReplWriterWorker-1] Marking collection config.cache.chunks.test0_fsmdb0.fsmcoll0 as collection version:
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.904-0500 "slaveDelay" : NumberLong(0),
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.901-0500 I INDEX [conn48] index build: done building index _id_hashed on ns test0_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.904-0500 "votes" : 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.858-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.904-0500 },
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.950-0500 I STORAGE [ReplWriterWorker-8] createCollection: config.cache.chunks.test0_fsmdb0.fsmcoll0 with provided UUID: dad6441c-7462-448b-9e35-8123157c4429 and options: { uuid: UUID("dad6441c-7462-448b-9e35-8123157c4429") }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.904-0500 {
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.185-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.agg_out took 0 ms and found the collection is not sharded
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.905-0500 "_id" : 2,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.981-0500 I COMMAND [conn19] command admin.$cmd appName: "MongoDB Shell" command: _configsvrShardCollection { _configsvrShardCollection: "test0_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("05f83359-a3f0-406c-a0fb-ce607d9e5952"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1574796657, 80), signature: { hash: BinData(0, D4A12BC9CC96C739408A5C23B7634C70BC58BDC4), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44390", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796657, 80), t: 1 } }, $db: "admin" } numYields:0 reslen:587 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 6 } }, Global: { acquireCount: { r: 2, w: 4 } }, Database: { acquireCount: { r: 2, w: 4 } }, Collection: { acquireCount: { r: 2, w: 4 } }, Mutex: { acquireCount: { r: 10, W: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 149ms
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.905-0500 "host" : "localhost:20006",
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.905-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.987-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.905-0500 "buildIndexes" : true,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.986-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.905-0500 "hidden" : false,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:58.034-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.905-0500 "priority" : 0,
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.902-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.905-0500 "tags" : {
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.859-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.906-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.966-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns config.cache.chunks.test0_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.906-0500 },
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.267-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44470 #30 (4 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.906-0500 "slaveDelay" : NumberLong(0),
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.983-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d715cde74b6784bb28d
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.906-0500 "votes" : 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.987-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 402a9136-b356-487a-a251-fc77c91f9382: config.cache.chunks.test0_fsmdb0.fsmcoll0 (dad6441c-7462-448b-9e35-8123157c4429 ): indexes: 1
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.906-0500 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.987-0500 I SHARDING [ReplWriterWorker-11] Marking collection config.cache.chunks.test0_fsmdb0.fsmcoll0 as collection version:
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.906-0500 ],
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:58.034-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test0_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.906-0500 "settings" : {
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.903-0500 I SHARDING [conn48] Marking collection test0_fsmdb0.fsmcoll0 as collection version:
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.907-0500 "chainingAllowed" : true,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.861-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.907-0500 "heartbeatIntervalMillis" : 2000,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.987-0500 I INDEX [ReplWriterWorker-1] index build: starting on config.cache.chunks.test0_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.907-0500 "heartbeatTimeoutSecs" : 10,
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.267-0500 I NETWORK [conn30] received client metadata from 127.0.0.1:44470 conn30: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.907-0500 "electionTimeoutMillis" : 86400000,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.984-0500 I SHARDING [conn19] Enabling sharding for database [test0_fsmdb0] in config db
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.907-0500 "catchUpTimeoutMillis" : -1,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.987-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.907-0500 "catchUpTakeoverDelayMillis" : 30000,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.989-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.907-0500 "getLastErrorModes" : {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:58.036-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 6bf498b2-3257-4037-9549-e00c4123e58e: config.cache.chunks.test0_fsmdb0.fsmcoll0 ( 44049d48-fa0f-4a8e-b7c3-56550b94d236 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.908-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.932-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38590 #63 (21 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.908-0500 },
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.864-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 1fb684bc-3d29-4564-a087-fcde85da89f4: test0_fsmdb0.fsmcoll0 ( d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.908-0500 "getLastErrorDefaults" : {
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.987-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.908-0500 "w" : 1,
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.268-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44472 #31 (5 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.908-0500 "wtimeout" : 0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.985-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d715cde74b6784bb28d' unlocked.
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.908-0500 },
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.988-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.908-0500 "replicaSetId" : ObjectId("5ddd7d6bcf8184c2e1492eba")
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.990-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test0_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.909-0500 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:58.086-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51628 #36 (10 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.909-0500 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.933-0500 I NETWORK [conn63] received client metadata from 127.0.0.1:38590 conn63: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.909-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.864-0500 I INDEX [conn55] Index build completed: 1fb684bc-3d29-4564-a087-fcde85da89f4
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.909-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.987-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 5bccd9bc-16ff-4358-ae99-1ed0b03b6f7f: config.cache.chunks.test0_fsmdb0.fsmcoll0 (dad6441c-7462-448b-9e35-8123157c4429 ): indexes: 1
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.909-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.268-0500 I NETWORK [conn31] received client metadata from 127.0.0.1:44472 conn31: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.909-0500 [jsTest] New session started with sessionID: { "id" : UUID("6e8f39a5-dcb6-4224-914a-4b7a8c3a6e3b") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.987-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d715cde74b6784bb293
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.909-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.988-0500 I SHARDING [ReplWriterWorker-11] Marking collection config.cache.chunks.test0_fsmdb0.fsmcoll0 as collection version:
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.909-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:57.991-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e183b86a-4a3d-404c-9a0c-03c1f20253fb: config.cache.chunks.test0_fsmdb0.fsmcoll0 ( 44049d48-fa0f-4a8e-b7c3-56550b94d236 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.910-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:58.086-0500 I NETWORK [conn36] received client metadata from 127.0.0.1:51628 conn36: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.910-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.934-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.fsmcoll0 to version 1|3||5ddd7d71cf8184c2e1492ff8 took 1 ms
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.910-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.879-0500 I SHARDING [conn55] CMD: shardcollection: { _shardsvrShardCollection: "test0_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("05f83359-a3f0-406c-a0fb-ce607d9e5952"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796657, 82), signature: { hash: BinData(0, D4A12BC9CC96C739408A5C23B7634C70BC58BDC4), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44390", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796657, 82), t: 1 } }, $db: "admin" }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.910-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.987-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.910-0500 [jsTest] New session started with sessionID: { "id" : UUID("d1997353-9ab6-4f83-9dac-ca99a5cfeaa6") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.286-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44474 #32 (6 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.910-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.989-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0.fsmcoll0' acquired for 'shardCollection', ts : 5ddd7d715cde74b6784bb295
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.910-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.991-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.911-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:58.086-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52514 #36 (11 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.911-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:58.310-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51682 #37 (11 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.911-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.935-0500 I SHARDING [conn63] Updating metadata for collection test0_fsmdb0.fsmcoll0 from collection version: to collection version: 1|3||5ddd7d71cf8184c2e1492ff8, shard version: 1|1||5ddd7d71cf8184c2e1492ff8 due to version change
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.911-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.880-0500 I SHARDING [conn55] about to log metadata event into changelog: { _id: "nz_desktop:20004-2019-11-26T14:30:57.880-0500-5ddd7d71cf8184c2e1492ff6", server: "nz_desktop:20004", shard: "shard-rs1", clientAddr: "127.0.0.1:46028", time: new Date(1574796657880), what: "shardCollection.start", ns: "test0_fsmdb0.fsmcoll0", details: { shardKey: { _id: "hashed" }, collection: "test0_fsmdb0.fsmcoll0", uuid: UUID("d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3"), empty: true, fromMapReduce: false, primary: "shard-rs1:shard-rs1/localhost:20004,localhost:20005,localhost:20006", numChunks: 4 } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.911-0500 [jsTest] New session started with sessionID: { "id" : UUID("5303806a-f35b-4921-923c-e1d640632784") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.988-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.911-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.286-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44476 #33 (7 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.911-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.990-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.912-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.991-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test0_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.912-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:58.086-0500 I NETWORK [conn36] received client metadata from 127.0.0.1:52514 conn36: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.912-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:58.310-0500 I NETWORK [conn37] received client metadata from 127.0.0.1:51682 conn37: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.912-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.935-0500 I STORAGE [ShardServerCatalogCacheLoader-0] createCollection: config.cache.chunks.test0_fsmdb0.fsmcoll0 with provided UUID: 44049d48-fa0f-4a8e-b7c3-56550b94d236 and options: { uuid: UUID("44049d48-fa0f-4a8e-b7c3-56550b94d236") }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.912-0500 [jsTest] New session started with sessionID: { "id" : UUID("38554dc4-e9a9-4ec4-a5e4-d18d66aa1099") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.882-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46050 #58 (23 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.912-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.988-0500 I SHARDING [ReplWriterWorker-10] Marking collection config.cache.chunks.test0_fsmdb0.fsmcoll0 as collection version:
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.912-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.286-0500 I NETWORK [conn32] received client metadata from 127.0.0.1:44474 conn32: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.912-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.991-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.fsmcoll0 to version 1|3||5ddd7d71cf8184c2e1492ff8 took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.913-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:57.994-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 402a9136-b356-487a-a251-fc77c91f9382: config.cache.chunks.test0_fsmdb0.fsmcoll0 ( dad6441c-7462-448b-9e35-8123157c4429 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.913-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:58.310-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52572 #37 (12 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.913-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.950-0500 I INDEX [ShardServerCatalogCacheLoader-0] index build: done building index _id_ on ns config.cache.chunks.test0_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.913-0500 [jsTest] New session started with sessionID: { "id" : UUID("cc2c1152-a14a-435b-867a-7962f5d61aac") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.882-0500 I NETWORK [conn58] received client metadata from 127.0.0.1:46050 conn58: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.913-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.991-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.913-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.286-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44478 #34 (8 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.913-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.992-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d715cde74b6784bb295' unlocked.
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.913-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.086-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51264 #31 (7 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.914-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:58.310-0500 I NETWORK [conn37] received client metadata from 127.0.0.1:52572 conn37: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.914-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.914-0500 [jsTest] New session started with sessionID: { "id" : UUID("5151cfad-b1b3-4067-9ac1-c25e460c902b") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.950-0500 I INDEX [ShardServerCatalogCacheLoader-0] Registering index build: 7ce827f8-b65f-4c26-9ea5-b78ba28743b8
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.914-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.882-0500 W CONTROL [conn57] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 4 }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.914-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.992-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test0_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.914-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.286-0500 I NETWORK [conn33] received client metadata from 127.0.0.1:44476 conn33: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.914-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:57.993-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d715cde74b6784bb293' unlocked.
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.914-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.086-0500 I NETWORK [conn31] received client metadata from 127.0.0.1:51264 conn31: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.915-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.966-0500 I INDEX [ShardServerCatalogCacheLoader-0] index build: starting on config.cache.chunks.test0_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.915-0500 [jsTest] Workload(s) started: jstests/concurrency/fsm_workloads/agg_out.js
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.883-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46054 #59 (24 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.915-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:57.994-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 5bccd9bc-16ff-4358-ae99-1ed0b03b6f7f: config.cache.chunks.test0_fsmdb0.fsmcoll0 ( dad6441c-7462-448b-9e35-8123157c4429 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.915-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.286-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44480 #35 (9 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.915-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:58.152-0500 I SHARDING [conn52] distributed lock 'test0_fsmdb0' acquired for 'createCollection', ts: 5ddd7d725cde74b6784bb2a7
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.915-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.165-0500 I STORAGE [ReplWriterWorker-12] createCollection: test0_fsmdb0.agg_out with provided UUID: b94b968d-a0c7-4026-a629-39b3d74e6ef1 and options: { uuid: UUID("b94b968d-a0c7-4026-a629-39b3d74e6ef1") }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.915-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.966-0500 I INDEX [ShardServerCatalogCacheLoader-0] build may temporarily use up to 500 megabytes of RAM
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.916-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.883-0500 I NETWORK [conn59] received client metadata from 127.0.0.1:46054 conn59: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.916-0500 [jsTest] New session started with sessionID: { "id" : UUID("05f83359-a3f0-406c-a0fb-ce607d9e5952") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.086-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34626 #32 (8 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.916-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.287-0500 I NETWORK [conn34] received client metadata from 127.0.0.1:44478 conn34: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.916-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:58.153-0500 I SHARDING [conn52] distributed lock 'test0_fsmdb0.agg_out' acquired for 'createCollection', ts: 5ddd7d725cde74b6784bb2a9
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.916-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.179-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test0_fsmdb0.agg_out
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.916-0500 Using 5 threads (requested 5)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.966-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Index build initialized: 7ce827f8-b65f-4c26-9ea5-b78ba28743b8: config.cache.chunks.test0_fsmdb0.fsmcoll0 (44049d48-fa0f-4a8e-b7c3-56550b94d236 ): indexes: 1
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.916-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.916-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.916-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 Implicit session: session { "id" : UUID("fc845c81-d797-4c0e-812b-5cf0f7874d4f") }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 Implicit session: session { "id" : UUID("fe21ba3c-282c-45cb-8615-8ea14638276a") }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 Implicit session: session { "id" : UUID("a8339e56-8061-4a7a-adae-3928817a882a") }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 Implicit session: session { "id" : UUID("3c777b01-5d0e-4162-ba92-ee969a2bd540") }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 Implicit session: session { "id" : UUID("b2e0e9c0-748c-487e-b866-8c104872b2d8") }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 [tid:3] setting random seed: 779747966
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 [tid:2] setting random seed: 1934952709
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 [tid:0] setting random seed: 143932390
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 [tid:1] setting random seed: 994395972
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 [tid:4] setting random seed: 1304715007
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 [tid:3]
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 [jsTest] New session started with sessionID: { "id" : UUID("5033fb0b-b223-4faf-974b-5b5f981f137e") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.917-0500 [tid:2]
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500 [jsTest] New session started with sessionID: { "id" : UUID("0cab2b8e-d3fc-4dad-b1d8-353217ca01c7") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500 [tid:0]
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500 [jsTest] New session started with sessionID: { "id" : UUID("f380e6e7-e7db-4d01-a770-8e3aea8758d2") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.884-0500 I NETWORK [conn56] end connection 127.0.0.1:46040 (23 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500 [tid:1]
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500 [jsTest] New session started with sessionID: { "id" : UUID("14894096-7669-49a5-82f4-aeca25550ea6") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.918-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.919-0500 [tid:4]
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.919-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.919-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.919-0500 [jsTest] New session started with sessionID: { "id" : UUID("4215d65a-2dd1-4ab2-bcf3-1c5ee3325733") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.919-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.919-0500
[fsm_workload_test:agg_out] 2019-11-26T14:30:58.919-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.086-0500 I NETWORK [conn32] received client metadata from 127.0.0.1:34626 conn32: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.287-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44482 #36 (10 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:58.181-0500 I SHARDING [conn52] distributed lock with ts: 5ddd7d725cde74b6784bb2a9 unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.213-0500 I INDEX [ReplWriterWorker-9] index build: starting on test0_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.966-0500 I INDEX [ShardServerCatalogCacheLoader-0] Waiting for index build to complete: 7ce827f8-b65f-4c26-9ea5-b78ba28743b8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.885-0500 I NETWORK [conn57] end connection 127.0.0.1:46042 (22 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.165-0500 I STORAGE [ReplWriterWorker-1] createCollection: test0_fsmdb0.agg_out with provided UUID: b94b968d-a0c7-4026-a629-39b3d74e6ef1 and options: { uuid: UUID("b94b968d-a0c7-4026-a629-39b3d74e6ef1") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.180-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:58.182-0500 I SHARDING [conn52] distributed lock with ts: 5ddd7d725cde74b6784bb2a7 unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.213-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.966-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.888-0500 I NETWORK [conn49] end connection 127.0.0.1:45986 (21 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.287-0500 I NETWORK [conn35] received client metadata from 127.0.0.1:44480 conn35: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.214-0500 I INDEX [ReplWriterWorker-10] index build: starting on test0_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.213-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 194f6c0e-f357-4046-b236-03f6ece4f854: test0_fsmdb0.agg_out (b94b968d-a0c7-4026-a629-39b3d74e6ef1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.967-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.894-0500 I NETWORK [conn47] end connection 127.0.0.1:45976 (20 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.287-0500 I NETWORK [conn36] received client metadata from 127.0.0.1:44482 conn36: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.214-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.213-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.969-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.932-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.fsmcoll0 to version 1|3||5ddd7d71cf8184c2e1492ff8 took 1 ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.290-0500 I NETWORK [conn31] end connection 127.0.0.1:44472 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.214-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: a25d9b86-8f93-40f5-8ce4-ce45df009c7b: test0_fsmdb0.agg_out (b94b968d-a0c7-4026-a629-39b3d74e6ef1 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.214-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.972-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 7ce827f8-b65f-4c26-9ea5-b78ba28743b8: config.cache.chunks.test0_fsmdb0.fsmcoll0 ( 44049d48-fa0f-4a8e-b7c3-56550b94d236 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.932-0500 I SHARDING [conn55] Marking collection test0_fsmdb0.fsmcoll0 as collection version: 1|3||5ddd7d71cf8184c2e1492ff8, shard version: 1|3||5ddd7d71cf8184c2e1492ff8
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.290-0500 I NETWORK [conn30] end connection 127.0.0.1:44470 (8 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.214-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.216-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.972-0500 I INDEX [ShardServerCatalogCacheLoader-0] Index build completed: 7ce827f8-b65f-4c26-9ea5-b78ba28743b8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.932-0500 I STORAGE [ShardServerCatalogCacheLoader-1] createCollection: config.cache.chunks.test0_fsmdb0.fsmcoll0 with provided UUID: dad6441c-7462-448b-9e35-8123157c4429 and options: { uuid: UUID("dad6441c-7462-448b-9e35-8123157c4429") }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.296-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44486 #37 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.215-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.218-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 194f6c0e-f357-4046-b236-03f6ece4f854: test0_fsmdb0.agg_out ( b94b968d-a0c7-4026-a629-39b3d74e6ef1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:57.972-0500 I SHARDING [ShardServerCatalogCacheLoader-0] Marking collection config.cache.chunks.test0_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.949-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: done building index _id_ on ns config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.296-0500 I NETWORK [conn37] received client metadata from 127.0.0.1:44486 conn37: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.217-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.309-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51304 #32 (8 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.086-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38596 #64 (22 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.949-0500 I INDEX [ShardServerCatalogCacheLoader-1] Registering index build: 3997e8a4-8af3-47b4-81bc-09cb80565e7e
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.921-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.298-0500 I NETWORK [conn32] end connection 127.0.0.1:44474 (8 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.220-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: a25d9b86-8f93-40f5-8ce4-ce45df009c7b: test0_fsmdb0.agg_out ( b94b968d-a0c7-4026-a629-39b3d74e6ef1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.309-0500 I NETWORK [conn32] received client metadata from 127.0.0.1:51304 conn32: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.086-0500 I NETWORK [conn64] received client metadata from 127.0.0.1:38596 conn64: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.963-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: starting on config.cache.chunks.test0_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.298-0500 I NETWORK [conn33] end connection 127.0.0.1:44476 (7 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.309-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34666 #33 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.310-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51318 #33 (9 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.087-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38604 #65 (23 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.963-0500 I INDEX [ShardServerCatalogCacheLoader-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.299-0500 I NETWORK [conn34] end connection 127.0.0.1:44478 (6 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.309-0500 I NETWORK [conn33] received client metadata from 127.0.0.1:34666 conn33: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.311-0500 I NETWORK [conn33] received client metadata from 127.0.0.1:51318 conn33: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.087-0500 I NETWORK [conn65] received client metadata from 127.0.0.1:38604 conn65: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.964-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Index build initialized: 3997e8a4-8af3-47b4-81bc-09cb80565e7e: config.cache.chunks.test0_fsmdb0.fsmcoll0 (dad6441c-7462-448b-9e35-8123157c4429 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.310-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34676 #34 (10 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.446-0500 I COMMAND [conn36] command test0_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("0cab2b8e-d3fc-4dad-b1d8-353217ca01c7") }, $db: "test0_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 138ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.344-0500 I STORAGE [ReplWriterWorker-7] createCollection: test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 with provided UUID: 8c546110-a600-42be-a2f8-58129a036e1b and options: { uuid: UUID("8c546110-a600-42be-a2f8-58129a036e1b"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.183-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38608 #66 (24 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.964-0500 I INDEX [ShardServerCatalogCacheLoader-1] Waiting for index build to complete: 3997e8a4-8af3-47b4-81bc-09cb80565e7e
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.310-0500 I NETWORK [conn34] received client metadata from 127.0.0.1:34676 conn34: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.503-0500 I COMMAND [conn37] command test0_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("4215d65a-2dd1-4ab2-bcf3-1c5ee3325733") }, $db: "test0_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 195ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.183-0500 I NETWORK [conn66] received client metadata from 127.0.0.1:38608 conn66: { driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.964-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.357-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.344-0500 I STORAGE [ReplWriterWorker-2] createCollection: test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 with provided UUID: 8c546110-a600-42be-a2f8-58129a036e1b and options: { uuid: UUID("8c546110-a600-42be-a2f8-58129a036e1b"), temp: true }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.546-0500 I COMMAND [conn35] command test0_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f380e6e7-e7db-4d01-a770-8e3aea8758d2") }, $db: "test0_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 238ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.308-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38632 #67 (25 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.964-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.358-0500 I STORAGE [ReplWriterWorker-4] createCollection: test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed with provided UUID: de7bc474-c0d1-4e53-be30-0838b3a89414 and options: { uuid: UUID("de7bc474-c0d1-4e53-be30-0838b3a89414"), temp: true }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.570-0500 I COMMAND [conn36] command test0_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("0cab2b8e-d3fc-4dad-b1d8-353217ca01c7") }, $clusterTime: { clusterTime: Timestamp(1574796658, 1030), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test0_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 122ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.308-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38634 #68 (26 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.359-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.967-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.375-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.721-0500 I COMMAND [conn35] command test0_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f380e6e7-e7db-4d01-a770-8e3aea8758d2") }, $clusterTime: { clusterTime: Timestamp(1574796658, 2493), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test0_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 173ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.308-0500 I NETWORK [conn68] received client metadata from 127.0.0.1:38634 conn68: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.360-0500 I STORAGE [ReplWriterWorker-11] createCollection: test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed with provided UUID: de7bc474-c0d1-4e53-be30-0838b3a89414 and options: { uuid: UUID("de7bc474-c0d1-4e53-be30-0838b3a89414"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.971-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 3997e8a4-8af3-47b4-81bc-09cb80565e7e: config.cache.chunks.test0_fsmdb0.fsmcoll0 ( dad6441c-7462-448b-9e35-8123157c4429 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.375-0500 I STORAGE [ReplWriterWorker-1] createCollection: test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 with provided UUID: a3fdf230-f6aa-432a-9b30-89f199e2c6c3 and options: { uuid: UUID("a3fdf230-f6aa-432a-9b30-89f199e2c6c3"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.308-0500 I NETWORK [conn67] received client metadata from 127.0.0.1:38632 conn67: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.376-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.971-0500 I INDEX [ShardServerCatalogCacheLoader-1] Index build completed: 3997e8a4-8af3-47b4-81bc-09cb80565e7e
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.389-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.310-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38646 #69 (27 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.377-0500 I STORAGE [ReplWriterWorker-0] createCollection: test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 with provided UUID: a3fdf230-f6aa-432a-9b30-89f199e2c6c3 and options: { uuid: UUID("a3fdf230-f6aa-432a-9b30-89f199e2c6c3"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.971-0500 I SHARDING [ShardServerCatalogCacheLoader-1] Marking collection config.cache.chunks.test0_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.390-0500 I STORAGE [ReplWriterWorker-6] createCollection: test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 with provided UUID: f5f461a4-5e3a-4722-9b40-3176f32a641f and options: { uuid: UUID("f5f461a4-5e3a-4722-9b40-3176f32a641f"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.310-0500 I NETWORK [conn69] received client metadata from 127.0.0.1:38646 conn69: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.390-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.974-0500 I SHARDING [conn55] Created 4 chunk(s) for: test0_fsmdb0.fsmcoll0, producing collection version 1|3||5ddd7d71cf8184c2e1492ff8
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.405-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.311-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38662 #70 (28 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.391-0500 I STORAGE [ReplWriterWorker-6] createCollection: test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 with provided UUID: f5f461a4-5e3a-4722-9b40-3176f32a641f and options: { uuid: UUID("f5f461a4-5e3a-4722-9b40-3176f32a641f"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.974-0500 I SHARDING [conn55] about to log metadata event into changelog: { _id: "nz_desktop:20004-2019-11-26T14:30:57.974-0500-5ddd7d71cf8184c2e1493029", server: "nz_desktop:20004", shard: "shard-rs1", clientAddr: "127.0.0.1:46028", time: new Date(1574796657974), what: "shardCollection.end", ns: "test0_fsmdb0.fsmcoll0", details: { version: "1|3||5ddd7d71cf8184c2e1492ff8" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.406-0500 I STORAGE [ReplWriterWorker-12] createCollection: test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 with provided UUID: 3d78450b-1218-4164-a3d4-22cb658c3066 and options: { uuid: UUID("3d78450b-1218-4164-a3d4-22cb658c3066"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.311-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38666 #71 (29 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.407-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:57.975-0500 I COMMAND [conn55] command admin.$cmd appName: "MongoDB Shell" command: _shardsvrShardCollection { _shardsvrShardCollection: "test0_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("05f83359-a3f0-406c-a0fb-ce607d9e5952"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796657, 82), signature: { hash: BinData(0, D4A12BC9CC96C739408A5C23B7634C70BC58BDC4), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44390", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796657, 82), t: 1 } }, $db: "admin" } numYields:0 reslen:415 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 9 } }, ReplicationStateTransition: { acquireCount: { w: 15 } }, Global: { acquireCount: { r: 8, w: 7 } }, Database: { acquireCount: { r: 8, w: 7, W: 1 } }, Collection: { acquireCount: { r: 8, w: 3, W: 4 } }, Mutex: { acquireCount: { r: 16, W: 4 } } } flowControl:{ acquireCount: 5 } storage:{} protocol:op_msg 141ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.422-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.311-0500 I NETWORK [conn70] received client metadata from 127.0.0.1:38662 conn70: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.407-0500 I STORAGE [ReplWriterWorker-1] createCollection: test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 with provided UUID: 3d78450b-1218-4164-a3d4-22cb658c3066 and options: { uuid: UUID("3d78450b-1218-4164-a3d4-22cb658c3066"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.086-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46064 #61 (21 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.439-0500 I INDEX [ReplWriterWorker-9] index build: starting on test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.312-0500 I NETWORK [conn71] received client metadata from 127.0.0.1:38666 conn71: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.423-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.086-0500 I NETWORK [conn61] received client metadata from 127.0.0.1:46064 conn61: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.439-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.439-0500 I INDEX [ReplWriterWorker-10] index build: starting on test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.506-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38682 #72 (30 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.092-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46072 #62 (22 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.439-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 14b2d0e0-79c5-4956-bab9-7e64b1ce416d: test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 (8c546110-a600-42be-a2f8-58129a036e1b ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.439-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.507-0500 I NETWORK [conn72] received client metadata from 127.0.0.1:38682 conn72: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.092-0500 I NETWORK [conn62] received client metadata from 127.0.0.1:46072 conn62: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.439-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.439-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 9ac8d135-8c1d-47c7-b3dd-59f719ec4f20: test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 (8c546110-a600-42be-a2f8-58129a036e1b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.511-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38684 #73 (31 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.153-0500 I STORAGE [conn55] createCollection: test0_fsmdb0.agg_out with generated UUID: b94b968d-a0c7-4026-a629-39b3d74e6ef1 and options: {}
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.439-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.439-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.511-0500 I NETWORK [conn73] received client metadata from 127.0.0.1:38684 conn73: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.164-0500 I INDEX [conn55] index build: done building index _id_ on ns test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.443-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 64 side writes (inserted: 64, deleted: 0) for '_id_hashed' in 1 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.440-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.184-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46076 #63 (23 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.443-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.444-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 64 side writes (inserted: 64, deleted: 0) for '_id_hashed' in 1 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.184-0500 I NETWORK [conn63] received client metadata from 127.0.0.1:46076 conn63: { driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.449-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 14b2d0e0-79c5-4956-bab9-7e64b1ce416d: test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 ( 8c546110-a600-42be-a2f8-58129a036e1b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.447-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 171 side writes (inserted: 171, deleted: 0) for '_id_hashed' in 1 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.185-0500 I INDEX [conn62] Registering index build: 423ae1f6-4a8a-4af3-a6c4-ee550897f91c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.464-0500 I INDEX [ReplWriterWorker-0] index build: starting on test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.449-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 142 side writes (inserted: 142, deleted: 0) for '_id_hashed' in 1 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.195-0500 I INDEX [conn62] index build: starting on test0_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.464-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.449-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.195-0500 I INDEX [conn62] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.464-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 48839305-c870-4e16-b85a-3bea3cfb882d: test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed (de7bc474-c0d1-4e53-be30-0838b3a89414 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.450-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 9ac8d135-8c1d-47c7-b3dd-59f719ec4f20: test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 ( 8c546110-a600-42be-a2f8-58129a036e1b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.195-0500 I STORAGE [conn62] Index build initialized: 423ae1f6-4a8a-4af3-a6c4-ee550897f91c: test0_fsmdb0.agg_out (b94b968d-a0c7-4026-a629-39b3d74e6ef1 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.464-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.468-0500 I INDEX [ReplWriterWorker-13] index build: starting on test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.195-0500 I INDEX [conn62] Waiting for index build to complete: 423ae1f6-4a8a-4af3-a6c4-ee550897f91c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.464-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.468-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.195-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.466-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 (8c546110-a600-42be-a2f8-58129a036e1b) to test0_fsmdb0.agg_out and drop b94b968d-a0c7-4026-a629-39b3d74e6ef1.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.468-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 1679dba4-6162-455d-9441-c98ed2504aa8: test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed (de7bc474-c0d1-4e53-be30-0838b3a89414 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.196-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.467-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.468-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.198-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.467-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test0_fsmdb0.agg_out (b94b968d-a0c7-4026-a629-39b3d74e6ef1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 1030), t: 1 } and commit timestamp Timestamp(1574796658, 1030)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.469-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.199-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 423ae1f6-4a8a-4af3-a6c4-ee550897f91c: test0_fsmdb0.agg_out ( b94b968d-a0c7-4026-a629-39b3d74e6ef1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.468-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test0_fsmdb0.agg_out (b94b968d-a0c7-4026-a629-39b3d74e6ef1).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.470-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 (8c546110-a600-42be-a2f8-58129a036e1b) to test0_fsmdb0.agg_out and drop b94b968d-a0c7-4026-a629-39b3d74e6ef1.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.199-0500 I INDEX [conn62] Index build completed: 423ae1f6-4a8a-4af3-a6c4-ee550897f91c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.468-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 8c546110-a600-42be-a2f8-58129a036e1b from test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.471-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.308-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46102 #64 (24 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.468-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (b94b968d-a0c7-4026-a629-39b3d74e6ef1)'. Ident: 'index-42--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 1030)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.472-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test0_fsmdb0.agg_out (b94b968d-a0c7-4026-a629-39b3d74e6ef1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 1030), t: 1 } and commit timestamp Timestamp(1574796658, 1030)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.308-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46104 #65 (25 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.468-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (b94b968d-a0c7-4026-a629-39b3d74e6ef1)'. Ident: 'index-43--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 1030)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.472-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test0_fsmdb0.agg_out (b94b968d-a0c7-4026-a629-39b3d74e6ef1).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.308-0500 I NETWORK [conn64] received client metadata from 127.0.0.1:46102 conn64: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.468-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-41--2310912778499990807, commit timestamp: Timestamp(1574796658, 1030)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.472-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 8c546110-a600-42be-a2f8-58129a036e1b from test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.308-0500 I NETWORK [conn65] received client metadata from 127.0.0.1:46104 conn65: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.469-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 48839305-c870-4e16-b85a-3bea3cfb882d: test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed ( de7bc474-c0d1-4e53-be30-0838b3a89414 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.472-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (b94b968d-a0c7-4026-a629-39b3d74e6ef1)'. Ident: 'index-42--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 1030)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.488-0500 I STORAGE [ReplWriterWorker-8] createCollection: test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f with provided UUID: a6dfb80e-0801-4b7b-8d26-00ad7c26e35a and options: { uuid: UUID("a6dfb80e-0801-4b7b-8d26-00ad7c26e35a"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.472-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (b94b968d-a0c7-4026-a629-39b3d74e6ef1)'. Ident: 'index-43--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 1030)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.503-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.472-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-41--7234316082034423155, commit timestamp: Timestamp(1574796658, 1030)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.520-0500 I INDEX [ReplWriterWorker-12] index build: starting on test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.473-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 1679dba4-6162-455d-9441-c98ed2504aa8: test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed ( de7bc474-c0d1-4e53-be30-0838b3a89414 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.309-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46110 #67 (26 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.520-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.489-0500 I STORAGE [ReplWriterWorker-15] createCollection: test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f with provided UUID: a6dfb80e-0801-4b7b-8d26-00ad7c26e35a and options: { uuid: UUID("a6dfb80e-0801-4b7b-8d26-00ad7c26e35a"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.310-0500 I NETWORK [conn67] received client metadata from 127.0.0.1:46110 conn67: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.520-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 0537ec9f-a457-4646-a976-e8d34e9a4a58: test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 (a3fdf230-f6aa-432a-9b30-89f199e2c6c3 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.504-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.310-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46122 #70 (27 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.520-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.523-0500 I INDEX [ReplWriterWorker-8] index build: starting on test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.310-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.523-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.523-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.310-0500 I SHARDING [updateShardIdentityConfigString] Updating config server with confirmed set shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.532-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.523-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 21ec0f76-d909-4925-b20b-34870b9e0c3e: test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 (a3fdf230-f6aa-432a-9b30-89f199e2c6c3 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.310-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46124 #71 (28 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.540-0500 I INDEX [ReplWriterWorker-7] index build: starting on test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.524-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.311-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46126 #74 (29 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.540-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.524-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.311-0500 I NETWORK [conn70] received client metadata from 127.0.0.1:46122 conn70: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.540-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 8d3f5a79-e586-4e84-8d51-00312dddf846: test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 (f5f461a4-5e3a-4722-9b40-3176f32a641f ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.526-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.311-0500 I NETWORK [conn71] received client metadata from 127.0.0.1:46124 conn71: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.540-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.536-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 21ec0f76-d909-4925-b20b-34870b9e0c3e: test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 ( a3fdf230-f6aa-432a-9b30-89f199e2c6c3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.311-0500 I NETWORK [conn74] received client metadata from 127.0.0.1:46126 conn74: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.541-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 0537ec9f-a457-4646-a976-e8d34e9a4a58: test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 ( a3fdf230-f6aa-432a-9b30-89f199e2c6c3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.544-0500 I INDEX [ReplWriterWorker-11] index build: starting on test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.311-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46130 #75 (30 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.541-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.544-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.311-0500 I NETWORK [conn75] received client metadata from 127.0.0.1:46130 conn75: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.544-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.544-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 93bb9e2b-d4f5-447b-8ab2-404e5ef85444: test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 (f5f461a4-5e3a-4722-9b40-3176f32a641f ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.312-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46134 #77 (31 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.547-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 8d3f5a79-e586-4e84-8d51-00312dddf846: test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 ( f5f461a4-5e3a-4722-9b40-3176f32a641f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.544-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.312-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46136 #80 (32 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.563-0500 I INDEX [ReplWriterWorker-4] index build: starting on test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.545-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.312-0500 I NETWORK [conn77] received client metadata from 127.0.0.1:46134 conn77: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.563-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.547-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.312-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46139 #81 (33 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.563-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: cee27d59-208b-462f-a379-3ea757639100: test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 (3d78450b-1218-4164-a3d4-22cb658c3066 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.553-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 93bb9e2b-d4f5-447b-8ab2-404e5ef85444: test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 ( f5f461a4-5e3a-4722-9b40-3176f32a641f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.312-0500 I NETWORK [conn80] received client metadata from 127.0.0.1:46136 conn80: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.564-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.567-0500 I INDEX [ReplWriterWorker-9] index build: starting on test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.312-0500 I STORAGE [conn77] createCollection: test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 with generated UUID: 8c546110-a600-42be-a2f8-58129a036e1b and options: { temp: true }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:30:58.971-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.565-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500 Implicit session: session { "id" : UUID("5827d1aa-7ed2-49ad-a962-e7c285c2bcf4") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500 true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500 2019-11-26T14:30:58.980-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500 2019-11-26T14:30:58.980-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500 2019-11-26T14:30:58.981-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500 [jsTest] New session started with sessionID: { "id" : UUID("13791288-5852-4533-863e-b4a26d9a2e71") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500 2019-11-26T14:30:58.985-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500 2019-11-26T14:30:58.985-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500 2019-11-26T14:30:58.985-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500 2019-11-26T14:30:58.985-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.713-0500 2019-11-26T14:30:58.986-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.714-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.714-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.714-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.714-0500 [jsTest] New session started with sessionID: { "id" : UUID("0c2bb09e-0bd7-4cf3-be78-a8652850401a") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.714-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.714-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.714-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.567-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.714-0500 2019-11-26T14:30:58.987-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.714-0500 2019-11-26T14:30:58.987-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.714-0500 2019-11-26T14:30:58.987-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.714-0500 2019-11-26T14:30:58.987-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.972-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44544 #42 (7 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:58.981-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55948 #77 (38 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:58.985-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52614 #38 (13 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:58.985-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51730 #38 (12 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.985-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38694 #74 (32 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.312-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46138 #82 (34 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.565-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed (de7bc474-c0d1-4e53-be30-0838b3a89414) to test0_fsmdb0.agg_out and drop 8c546110-a600-42be-a2f8-58129a036e1b.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.568-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: d83e6da9-6d69-4afb-8dde-00ba11dd97bc: test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 (3d78450b-1218-4164-a3d4-22cb658c3066 ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:30:58.972-0500 I NETWORK [conn42] received client metadata from 127.0.0.1:44544 conn42: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:58.981-0500 I NETWORK [conn77] received client metadata from 127.0.0.1:55948 conn77: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:58.985-0500 I NETWORK [conn38] received client metadata from 127.0.0.1:52614 conn38: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.985-0500 I NETWORK [conn74] received client metadata from 127.0.0.1:38694 conn74: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.312-0500 I NETWORK [conn81] received client metadata from 127.0.0.1:46139 conn81: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:30:58.985-0500 I NETWORK [conn38] received client metadata from 127.0.0.1:51730 conn38: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.567-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.568-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:58.982-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55950 #78 (39 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:30:59.473-0500 I NETWORK [conn4] end connection 127.0.0.1:52028 (12 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.986-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38698 #75 (33 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.312-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46142 #84 (35 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.567-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test0_fsmdb0.agg_out (8c546110-a600-42be-a2f8-58129a036e1b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 1540), t: 1 } and commit timestamp Timestamp(1574796658, 1540)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.568-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:30:58.982-0500 I NETWORK [conn78] received client metadata from 127.0.0.1:55950 conn78: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:58.986-0500 I NETWORK [conn75] received client metadata from 127.0.0.1:38698 conn75: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.313-0500 I NETWORK [conn82] received client metadata from 127.0.0.1:46138 conn82: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.568-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test0_fsmdb0.agg_out (8c546110-a600-42be-a2f8-58129a036e1b).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.570-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed (de7bc474-c0d1-4e53-be30-0838b3a89414) to test0_fsmdb0.agg_out and drop 8c546110-a600-42be-a2f8-58129a036e1b.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:30:59.473-0500 I CONNPOOL [ReplNetwork] Ending connection to host localhost:20003 due to bad connection status: CallbackCanceled: Callback was canceled; 1 connections to that host remain open
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.313-0500 I NETWORK [conn84] received client metadata from 127.0.0.1:46142 conn84: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.568-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection de7bc474-c0d1-4e53-be30-0838b3a89414 from test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.571-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.313-0500 I STORAGE [conn82] createCollection: test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed with generated UUID: de7bc474-c0d1-4e53-be30-0838b3a89414 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.568-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (8c546110-a600-42be-a2f8-58129a036e1b)'. Ident: 'index-46--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 1540)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.571-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test0_fsmdb0.agg_out (8c546110-a600-42be-a2f8-58129a036e1b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 1540), t: 1 } and commit timestamp Timestamp(1574796658, 1540)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.313-0500 I STORAGE [conn84] createCollection: test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 with generated UUID: a3fdf230-f6aa-432a-9b30-89f199e2c6c3 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.568-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (8c546110-a600-42be-a2f8-58129a036e1b)'. Ident: 'index-55--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 1540)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.571-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test0_fsmdb0.agg_out (8c546110-a600-42be-a2f8-58129a036e1b).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.314-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46144 #85 (36 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.568-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-45--2310912778499990807, commit timestamp: Timestamp(1574796658, 1540)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.571-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection de7bc474-c0d1-4e53-be30-0838b3a89414 from test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.314-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46146 #88 (37 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.570-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: cee27d59-208b-462f-a379-3ea757639100: test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 ( 3d78450b-1218-4164-a3d4-22cb658c3066 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.572-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (8c546110-a600-42be-a2f8-58129a036e1b)'. Ident: 'index-46--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 1540)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.314-0500 I NETWORK [conn85] received client metadata from 127.0.0.1:46144 conn85: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.587-0500 I INDEX [ReplWriterWorker-1] index build: starting on test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.572-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (8c546110-a600-42be-a2f8-58129a036e1b)'. Ident: 'index-55--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 1540)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.314-0500 I NETWORK [conn88] received client metadata from 127.0.0.1:46146 conn88: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.587-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.572-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-45--7234316082034423155, commit timestamp: Timestamp(1574796658, 1540)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.315-0500 I STORAGE [conn85] createCollection: test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 with generated UUID: f5f461a4-5e3a-4722-9b40-3176f32a641f and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.587-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: ebe2dc71-2054-43dc-b0b4-d65822ae1b7f: test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.573-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: d83e6da9-6d69-4afb-8dde-00ba11dd97bc: test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 ( 3d78450b-1218-4164-a3d4-22cb658c3066 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.315-0500 I STORAGE [conn88] createCollection: test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 with generated UUID: 3d78450b-1218-4164-a3d4-22cb658c3066 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.587-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.588-0500 I INDEX [ReplWriterWorker-11] index build: starting on test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.341-0500 I INDEX [conn77] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.587-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.588-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.342-0500 I INDEX [conn77] Registering index build: 7dd58ed9-f778-479e-a4c3-862b21aee10c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.588-0500 I STORAGE [ReplWriterWorker-6] createCollection: test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c with provided UUID: b6c880d1-9035-42f2-bb70-5f65e6a39010 and options: { uuid: UUID("b6c880d1-9035-42f2-bb70-5f65e6a39010"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.588-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: e4af2642-79da-4316-94ac-0dec811f0928: test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.347-0500 I INDEX [conn82] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.590-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.588-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.348-0500 I INDEX [conn82] Registering index build: 36c1fa80-2723-47b9-84b2-ff20b92d496a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.600-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: ebe2dc71-2054-43dc-b0b4-d65822ae1b7f: test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f ( a6dfb80e-0801-4b7b-8d26-00ad7c26e35a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.589-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.606-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.590-0500 I STORAGE [ReplWriterWorker-0] createCollection: test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c with provided UUID: b6c880d1-9035-42f2-bb70-5f65e6a39010 and options: { uuid: UUID("b6c880d1-9035-42f2-bb70-5f65e6a39010"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.355-0500 I INDEX [conn84] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.615-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 (a3fdf230-f6aa-432a-9b30-89f199e2c6c3) to test0_fsmdb0.agg_out and drop de7bc474-c0d1-4e53-be30-0838b3a89414.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.592-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.355-0500 I INDEX [conn84] Registering index build: aac17a2c-437f-4aba-b5c6-bb2d271e1f64
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.615-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test0_fsmdb0.agg_out (de7bc474-c0d1-4e53-be30-0838b3a89414) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 2301), t: 1 } and commit timestamp Timestamp(1574796658, 2301)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.601-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: e4af2642-79da-4316-94ac-0dec811f0928: test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f ( a6dfb80e-0801-4b7b-8d26-00ad7c26e35a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.719-0500 I COMMAND [conn36] command test0_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("0cab2b8e-d3fc-4dad-b1d8-353217ca01c7") }, $clusterTime: { clusterTime: Timestamp(1574796658, 4052), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test0_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2131ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.362-0500 I INDEX [conn85] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.615-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test0_fsmdb0.agg_out (de7bc474-c0d1-4e53-be30-0838b3a89414).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.608-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.363-0500 I INDEX [conn85] Registering index build: cec3c590-bc1f-4e93-8f37-c251863b5304
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.719-0500 2019-11-26T14:31:00.719-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.615-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection a3fdf230-f6aa-432a-9b30-89f199e2c6c3 from test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.625-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 (a3fdf230-f6aa-432a-9b30-89f199e2c6c3) to test0_fsmdb0.agg_out and drop de7bc474-c0d1-4e53-be30-0838b3a89414.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.369-0500 I INDEX [conn88] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.615-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (de7bc474-c0d1-4e53-be30-0838b3a89414)'. Ident: 'index-48--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 2301)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.625-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test0_fsmdb0.agg_out (de7bc474-c0d1-4e53-be30-0838b3a89414) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 2301), t: 1 } and commit timestamp Timestamp(1574796658, 2301)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.370-0500 I INDEX [conn88] Registering index build: c4a86f34-137c-472f-b6cd-e6308fa190f9
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.615-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (de7bc474-c0d1-4e53-be30-0838b3a89414)'. Ident: 'index-57--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 2301)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.625-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test0_fsmdb0.agg_out (de7bc474-c0d1-4e53-be30-0838b3a89414).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.386-0500 I INDEX [conn77] index build: starting on test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.615-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-47--2310912778499990807, commit timestamp: Timestamp(1574796658, 2301)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.625-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection a3fdf230-f6aa-432a-9b30-89f199e2c6c3 from test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.386-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.633-0500 I INDEX [ReplWriterWorker-6] index build: starting on test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.625-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (de7bc474-c0d1-4e53-be30-0838b3a89414)'. Ident: 'index-48--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 2301)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.386-0500 I STORAGE [conn77] Index build initialized: 7dd58ed9-f778-479e-a4c3-862b21aee10c: test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 (8c546110-a600-42be-a2f8-58129a036e1b ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.633-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.625-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (de7bc474-c0d1-4e53-be30-0838b3a89414)'. Ident: 'index-57--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 2301)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.386-0500 I INDEX [conn77] Waiting for index build to complete: 7dd58ed9-f778-479e-a4c3-862b21aee10c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.634-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 7d157b94-028b-459a-b2b5-678b0d343a55: test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c (b6c880d1-9035-42f2-bb70-5f65e6a39010 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.721-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.625-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-47--7234316082034423155, commit timestamp: Timestamp(1574796658, 2301)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.721-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.721-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.721-0500 [jsTest] New session started with sessionID: { "id" : UUID("7e91fdc1-50f5-48c4-bbe4-ae673b4e422e") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.386-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.721-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.721-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.634-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.722-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.722-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.642-0500 I INDEX [ReplWriterWorker-0] index build: starting on test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.387-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.634-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.642-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.388-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.636-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.642-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 4b2c6273-1a69-44dc-80d7-2accd6a48061: test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c (b6c880d1-9035-42f2-bb70-5f65e6a39010 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.396-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 7dd58ed9-f778-479e-a4c3-862b21aee10c: test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 ( 8c546110-a600-42be-a2f8-58129a036e1b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.636-0500 I STORAGE [ReplWriterWorker-15] createCollection: test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c with provided UUID: 4fddd2b8-65d6-44a6-98ec-801e410db392 and options: { uuid: UUID("4fddd2b8-65d6-44a6-98ec-801e410db392"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.642-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.404-0500 I INDEX [conn82] index build: starting on test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.638-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 7d157b94-028b-459a-b2b5-678b0d343a55: test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c ( b6c880d1-9035-42f2-bb70-5f65e6a39010 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.643-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.404-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.652-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.645-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.404-0500 I STORAGE [conn82] Index build initialized: 36c1fa80-2723-47b9-84b2-ff20b92d496a: test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed (de7bc474-c0d1-4e53-be30-0838b3a89414 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.658-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 (3d78450b-1218-4164-a3d4-22cb658c3066) to test0_fsmdb0.agg_out and drop a3fdf230-f6aa-432a-9b30-89f199e2c6c3.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.645-0500 I STORAGE [ReplWriterWorker-8] createCollection: test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c with provided UUID: 4fddd2b8-65d6-44a6-98ec-801e410db392 and options: { uuid: UUID("4fddd2b8-65d6-44a6-98ec-801e410db392"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.404-0500 I INDEX [conn77] Index build completed: 7dd58ed9-f778-479e-a4c3-862b21aee10c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.658-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test0_fsmdb0.agg_out (a3fdf230-f6aa-432a-9b30-89f199e2c6c3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 3613), t: 1 } and commit timestamp Timestamp(1574796658, 3613)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.649-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 4b2c6273-1a69-44dc-80d7-2accd6a48061: test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c ( b6c880d1-9035-42f2-bb70-5f65e6a39010 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.404-0500 I INDEX [conn82] Waiting for index build to complete: 36c1fa80-2723-47b9-84b2-ff20b92d496a
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:00.723-0500 I COMMAND [conn15] command test0_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("5033fb0b-b223-4faf-974b-5b5f981f137e") }, $clusterTime: { clusterTime: Timestamp(1574796658, 3614), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test0_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2150ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.658-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test0_fsmdb0.agg_out (a3fdf230-f6aa-432a-9b30-89f199e2c6c3).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.663-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.418-0500 I INDEX [conn84] index build: starting on test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.658-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 3d78450b-1218-4164-a3d4-22cb658c3066 from test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.667-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 (3d78450b-1218-4164-a3d4-22cb658c3066) to test0_fsmdb0.agg_out and drop a3fdf230-f6aa-432a-9b30-89f199e2c6c3.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.418-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.658-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (a3fdf230-f6aa-432a-9b30-89f199e2c6c3)'. Ident: 'index-50--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 3613)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.668-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test0_fsmdb0.agg_out (a3fdf230-f6aa-432a-9b30-89f199e2c6c3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 3613), t: 1 } and commit timestamp Timestamp(1574796658, 3613)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.418-0500 I STORAGE [conn84] Index build initialized: aac17a2c-437f-4aba-b5c6-bb2d271e1f64: test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 (a3fdf230-f6aa-432a-9b30-89f199e2c6c3 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.658-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (a3fdf230-f6aa-432a-9b30-89f199e2c6c3)'. Ident: 'index-61--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 3613)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.668-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test0_fsmdb0.agg_out (a3fdf230-f6aa-432a-9b30-89f199e2c6c3).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.419-0500 I INDEX [conn84] Waiting for index build to complete: aac17a2c-437f-4aba-b5c6-bb2d271e1f64
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.658-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-49--2310912778499990807, commit timestamp: Timestamp(1574796658, 3613)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.668-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 3d78450b-1218-4164-a3d4-22cb658c3066 from test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.419-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.659-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 (f5f461a4-5e3a-4722-9b40-3176f32a641f) to test0_fsmdb0.agg_out and drop 3d78450b-1218-4164-a3d4-22cb658c3066.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.668-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (a3fdf230-f6aa-432a-9b30-89f199e2c6c3)'. Ident: 'index-50--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 3613)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.419-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.659-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test0_fsmdb0.agg_out (3d78450b-1218-4164-a3d4-22cb658c3066) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 3614), t: 1 } and commit timestamp Timestamp(1574796658, 3614)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.668-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (a3fdf230-f6aa-432a-9b30-89f199e2c6c3)'. Ident: 'index-61--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 3613)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.436-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.659-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test0_fsmdb0.agg_out (3d78450b-1218-4164-a3d4-22cb658c3066).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.668-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-49--7234316082034423155, commit timestamp: Timestamp(1574796658, 3613)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.445-0500 I INDEX [conn85] index build: starting on test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.659-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection f5f461a4-5e3a-4722-9b40-3176f32a641f from test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.668-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 (f5f461a4-5e3a-4722-9b40-3176f32a641f) to test0_fsmdb0.agg_out and drop 3d78450b-1218-4164-a3d4-22cb658c3066.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.445-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.659-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (3d78450b-1218-4164-a3d4-22cb658c3066)'. Ident: 'index-54--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 3614)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.668-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test0_fsmdb0.agg_out (3d78450b-1218-4164-a3d4-22cb658c3066) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 3614), t: 1 } and commit timestamp Timestamp(1574796658, 3614)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.445-0500 I STORAGE [conn85] Index build initialized: cec3c590-bc1f-4e93-8f37-c251863b5304: test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 (f5f461a4-5e3a-4722-9b40-3176f32a641f ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.659-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (3d78450b-1218-4164-a3d4-22cb658c3066)'. Ident: 'index-65--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 3614)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.668-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test0_fsmdb0.agg_out (3d78450b-1218-4164-a3d4-22cb658c3066).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.445-0500 I COMMAND [conn77] renameCollectionForCommand: rename test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 to test0_fsmdb0.agg_out and drop test0_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.659-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-53--2310912778499990807, commit timestamp: Timestamp(1574796658, 3614)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.668-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection f5f461a4-5e3a-4722-9b40-3176f32a641f from test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.445-0500 I STORAGE [conn77] dropCollection: test0_fsmdb0.agg_out (b94b968d-a0c7-4026-a629-39b3d74e6ef1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 1030), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.660-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a) to test0_fsmdb0.agg_out and drop f5f461a4-5e3a-4722-9b40-3176f32a641f.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.668-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (3d78450b-1218-4164-a3d4-22cb658c3066)'. Ident: 'index-54--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 3614)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.445-0500 I STORAGE [conn77] Finishing collection drop for test0_fsmdb0.agg_out (b94b968d-a0c7-4026-a629-39b3d74e6ef1).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.660-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test0_fsmdb0.agg_out (f5f461a4-5e3a-4722-9b40-3176f32a641f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 3615), t: 1 } and commit timestamp Timestamp(1574796658, 3615)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.669-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (3d78450b-1218-4164-a3d4-22cb658c3066)'. Ident: 'index-65--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 3614)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.445-0500 I STORAGE [conn77] renameCollection: renaming collection 8c546110-a600-42be-a2f8-58129a036e1b from test0_fsmdb0.tmp.agg_out.434b5ca3-fdd9-456e-8d8e-bb925b5f0d47 to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.660-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test0_fsmdb0.agg_out (f5f461a4-5e3a-4722-9b40-3176f32a641f).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.669-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-53--7234316082034423155, commit timestamp: Timestamp(1574796658, 3614)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.445-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (b94b968d-a0c7-4026-a629-39b3d74e6ef1)'. Ident: 'index-34--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 1030)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.660-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection a6dfb80e-0801-4b7b-8d26-00ad7c26e35a from test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.727-0500 I SHARDING [conn22] distributed lock 'test0_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d745cde74b6784bb2bf
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.669-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a) to test0_fsmdb0.agg_out and drop f5f461a4-5e3a-4722-9b40-3176f32a641f.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.445-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (b94b968d-a0c7-4026-a629-39b3d74e6ef1)'. Ident: 'index-35--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 1030)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.660-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (f5f461a4-5e3a-4722-9b40-3176f32a641f)'. Ident: 'index-52--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 3615)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.727-0500 I SHARDING [conn22] Enabling sharding for database [test0_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.669-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test0_fsmdb0.agg_out (f5f461a4-5e3a-4722-9b40-3176f32a641f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 3615), t: 1 } and commit timestamp Timestamp(1574796658, 3615)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.445-0500 I STORAGE [conn77] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-33--2588534479858262356, commit timestamp: Timestamp(1574796658, 1030)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.660-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (f5f461a4-5e3a-4722-9b40-3176f32a641f)'. Ident: 'index-63--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 3615)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.669-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test0_fsmdb0.agg_out (f5f461a4-5e3a-4722-9b40-3176f32a641f).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.445-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.660-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-51--2310912778499990807, commit timestamp: Timestamp(1574796658, 3615)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.669-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection a6dfb80e-0801-4b7b-8d26-00ad7c26e35a from test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.445-0500 I COMMAND [conn64] command test0_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("0cab2b8e-d3fc-4dad-b1d8-353217ca01c7"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8861593805058807273, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7543039478059373531, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test0_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796658307), clusterTime: Timestamp(1574796658, 516) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("0cab2b8e-d3fc-4dad-b1d8-353217ca01c7"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 516), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44482", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:385 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 136ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.678-0500 I INDEX [ReplWriterWorker-5] index build: starting on test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.669-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (f5f461a4-5e3a-4722-9b40-3176f32a641f)'. Ident: 'index-52--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 3615)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.446-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 36c1fa80-2723-47b9-84b2-ff20b92d496a: test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed ( de7bc474-c0d1-4e53-be30-0838b3a89414 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.678-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.669-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (f5f461a4-5e3a-4722-9b40-3176f32a641f)'. Ident: 'index-63--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 3615)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.728-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7d745cde74b6784bb2bf' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.446-0500 I INDEX [conn82] Index build completed: 36c1fa80-2723-47b9-84b2-ff20b92d496a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.678-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 1ec5c4ea-728b-4ccf-b2b3-064a3140eac6: test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c (4fddd2b8-65d6-44a6-98ec-801e410db392 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.669-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-51--7234316082034423155, commit timestamp: Timestamp(1574796658, 3615)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.449-0500 I INDEX [conn85] Waiting for index build to complete: cec3c590-bc1f-4e93-8f37-c251863b5304
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.678-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.687-0500 I INDEX [ReplWriterWorker-8] index build: starting on test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.449-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.679-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.687-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.449-0500 I STORAGE [conn77] createCollection: test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f with generated UUID: a6dfb80e-0801-4b7b-8d26-00ad7c26e35a and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.680-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c (b6c880d1-9035-42f2-bb70-5f65e6a39010) to test0_fsmdb0.agg_out and drop a6dfb80e-0801-4b7b-8d26-00ad7c26e35a.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.687-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: c15b5347-a1e8-4dcf-983b-b18e3800868e: test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c (4fddd2b8-65d6-44a6-98ec-801e410db392 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.449-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.681-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.687-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.450-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.681-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test0_fsmdb0.agg_out (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 4055), t: 1 } and commit timestamp Timestamp(1574796658, 4055)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.687-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.460-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.681-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test0_fsmdb0.agg_out (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.688-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c (b6c880d1-9035-42f2-bb70-5f65e6a39010) to test0_fsmdb0.agg_out and drop a6dfb80e-0801-4b7b-8d26-00ad7c26e35a.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.470-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.681-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection b6c880d1-9035-42f2-bb70-5f65e6a39010 from test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.690-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.477-0500 I INDEX [conn88] index build: starting on test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.681-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a)'. Ident: 'index-60--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 4055)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.690-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test0_fsmdb0.agg_out (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 4055), t: 1 } and commit timestamp Timestamp(1574796658, 4055)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.477-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.681-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a)'. Ident: 'index-67--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 4055)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.690-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test0_fsmdb0.agg_out (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.477-0500 I STORAGE [conn88] Index build initialized: c4a86f34-137c-472f-b6cd-e6308fa190f9: test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 (3d78450b-1218-4164-a3d4-22cb658c3066 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.681-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-59--2310912778499990807, commit timestamp: Timestamp(1574796658, 4055)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.690-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection b6c880d1-9035-42f2-bb70-5f65e6a39010 from test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.477-0500 I INDEX [conn88] Waiting for index build to complete: c4a86f34-137c-472f-b6cd-e6308fa190f9
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.682-0500 I STORAGE [ReplWriterWorker-7] createCollection: test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb with provided UUID: bf3cdc90-36f7-41c4-a8c0-a6114d9633bb and options: { uuid: UUID("bf3cdc90-36f7-41c4-a8c0-a6114d9633bb"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.690-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a)'. Ident: 'index-60--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 4055)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.477-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.683-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 1ec5c4ea-728b-4ccf-b2b3-064a3140eac6: test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c ( 4fddd2b8-65d6-44a6-98ec-801e410db392 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.690-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a)'. Ident: 'index-67--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 4055)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.478-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: aac17a2c-437f-4aba-b5c6-bb2d271e1f64: test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 ( a3fdf230-f6aa-432a-9b30-89f199e2c6c3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.698-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.690-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-59--7234316082034423155, commit timestamp: Timestamp(1574796658, 4055)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.479-0500 I INDEX [conn84] Index build completed: aac17a2c-437f-4aba-b5c6-bb2d271e1f64
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.698-0500 I STORAGE [ReplWriterWorker-9] createCollection: test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b with provided UUID: 7a897fba-4a89-4c44-b202-0f7f00ca4e4a and options: { uuid: UUID("7a897fba-4a89-4c44-b202-0f7f00ca4e4a"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.691-0500 I STORAGE [ReplWriterWorker-11] createCollection: test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb with provided UUID: bf3cdc90-36f7-41c4-a8c0-a6114d9633bb and options: { uuid: UUID("bf3cdc90-36f7-41c4-a8c0-a6114d9633bb"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.479-0500 I COMMAND [conn84] command test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f380e6e7-e7db-4d01-a770-8e3aea8758d2"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 521), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44480", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 123ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.711-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.692-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c15b5347-a1e8-4dcf-983b-b18e3800868e: test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c ( 4fddd2b8-65d6-44a6-98ec-801e410db392 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.484-0500 I INDEX [conn77] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.712-0500 I STORAGE [ReplWriterWorker-10] createCollection: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 with provided UUID: 2a92a479-79c8-4f0e-94d8-bb4228231baa and options: { uuid: UUID("2a92a479-79c8-4f0e-94d8-bb4228231baa"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.706-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.485-0500 I INDEX [conn77] Registering index build: 9cb31c4d-d221-44f4-b117-f7822cc5243e
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.725-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.707-0500 I STORAGE [ReplWriterWorker-3] createCollection: test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b with provided UUID: 7a897fba-4a89-4c44-b202-0f7f00ca4e4a and options: { uuid: UUID("7a897fba-4a89-4c44-b202-0f7f00ca4e4a"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.485-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: cec3c590-bc1f-4e93-8f37-c251863b5304: test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 ( f5f461a4-5e3a-4722-9b40-3176f32a641f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.726-0500 I STORAGE [ReplWriterWorker-0] createCollection: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 with provided UUID: a76156e7-eab0-4737-a0e0-cb79d918c04f and options: { uuid: UUID("a76156e7-eab0-4737-a0e0-cb79d918c04f"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.723-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.486-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.741-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.723-0500 I STORAGE [ReplWriterWorker-9] createCollection: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 with provided UUID: 2a92a479-79c8-4f0e-94d8-bb4228231baa and options: { uuid: UUID("2a92a479-79c8-4f0e-94d8-bb4228231baa"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.494-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.746-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c (4fddd2b8-65d6-44a6-98ec-801e410db392) to test0_fsmdb0.agg_out and drop b6c880d1-9035-42f2-bb70-5f65e6a39010.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.737-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.502-0500 I INDEX [conn77] index build: starting on test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.746-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test0_fsmdb0.agg_out (b6c880d1-9035-42f2-bb70-5f65e6a39010) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 4561), t: 1 } and commit timestamp Timestamp(1574796658, 4561)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.738-0500 I STORAGE [ReplWriterWorker-7] createCollection: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 with provided UUID: a76156e7-eab0-4737-a0e0-cb79d918c04f and options: { uuid: UUID("a76156e7-eab0-4737-a0e0-cb79d918c04f"), temp: true }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.732-0500 I SHARDING [conn22] distributed lock 'test0_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d745cde74b6784bb2c5
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.502-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.746-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test0_fsmdb0.agg_out (b6c880d1-9035-42f2-bb70-5f65e6a39010).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.751-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.502-0500 I STORAGE [conn77] Index build initialized: 9cb31c4d-d221-44f4-b117-f7822cc5243e: test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.746-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 4fddd2b8-65d6-44a6-98ec-801e410db392 from test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.757-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c (4fddd2b8-65d6-44a6-98ec-801e410db392) to test0_fsmdb0.agg_out and drop b6c880d1-9035-42f2-bb70-5f65e6a39010.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.502-0500 I INDEX [conn85] Index build completed: cec3c590-bc1f-4e93-8f37-c251863b5304
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.746-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (b6c880d1-9035-42f2-bb70-5f65e6a39010)'. Ident: 'index-70--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 4561)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.757-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test0_fsmdb0.agg_out (b6c880d1-9035-42f2-bb70-5f65e6a39010) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 4561), t: 1 } and commit timestamp Timestamp(1574796658, 4561)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.502-0500 I INDEX [conn77] Waiting for index build to complete: 9cb31c4d-d221-44f4-b117-f7822cc5243e
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.746-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (b6c880d1-9035-42f2-bb70-5f65e6a39010)'. Ident: 'index-71--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 4561)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.757-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test0_fsmdb0.agg_out (b6c880d1-9035-42f2-bb70-5f65e6a39010).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.502-0500 I COMMAND [conn82] renameCollectionForCommand: rename test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed to test0_fsmdb0.agg_out and drop test0_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.746-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-69--2310912778499990807, commit timestamp: Timestamp(1574796658, 4561)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.757-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 4fddd2b8-65d6-44a6-98ec-801e410db392 from test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.502-0500 I COMMAND [conn85] command test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("14894096-7669-49a5-82f4-aeca25550ea6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 521), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:57626", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 139ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.763-0500 I INDEX [ReplWriterWorker-14] index build: starting on test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.757-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (b6c880d1-9035-42f2-bb70-5f65e6a39010)'. Ident: 'index-70--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 4561)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.502-0500 I STORAGE [conn82] dropCollection: test0_fsmdb0.agg_out (8c546110-a600-42be-a2f8-58129a036e1b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 1540), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.763-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.757-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (b6c880d1-9035-42f2-bb70-5f65e6a39010)'. Ident: 'index-71--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 4561)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.502-0500 I STORAGE [conn82] Finishing collection drop for test0_fsmdb0.agg_out (8c546110-a600-42be-a2f8-58129a036e1b).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.763-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 2cbabb1e-1986-4d8b-9ae9-df066a8961cf: test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b (7a897fba-4a89-4c44-b202-0f7f00ca4e4a ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.757-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-69--7234316082034423155, commit timestamp: Timestamp(1574796658, 4561)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.502-0500 I STORAGE [conn82] renameCollection: renaming collection de7bc474-c0d1-4e53-be30-0838b3a89414 from test0_fsmdb0.tmp.agg_out.1cb5a98c-66a4-4a3a-8441-69153cbb48ed to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.763-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.772-0500 I INDEX [ReplWriterWorker-6] index build: starting on test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.502-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (8c546110-a600-42be-a2f8-58129a036e1b)'. Ident: 'index-42--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 1540)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.763-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.772-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.734-0500 I SHARDING [conn22] distributed lock 'test0_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d745cde74b6784bb2c7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.502-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (8c546110-a600-42be-a2f8-58129a036e1b)'. Ident: 'index-47--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 1540)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.763-0500 I STORAGE [ReplWriterWorker-3] createCollection: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 with provided UUID: 51cfafba-28d0-43f4-963f-34639b1282a5 and options: { uuid: UUID("51cfafba-28d0-43f4-963f-34639b1282a5"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.772-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: f365e8ad-3f98-4546-a35c-de221f17fe3c: test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b (7a897fba-4a89-4c44-b202-0f7f00ca4e4a ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.502-0500 I STORAGE [conn82] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-37--2588534479858262356, commit timestamp: Timestamp(1574796658, 1540)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.765-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.772-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.502-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.774-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 2cbabb1e-1986-4d8b-9ae9-df066a8961cf: test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b ( 7a897fba-4a89-4c44-b202-0f7f00ca4e4a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.773-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.503-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: c4a86f34-137c-472f-b6cd-e6308fa190f9: test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 ( 3d78450b-1218-4164-a3d4-22cb658c3066 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.781-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.773-0500 I STORAGE [ReplWriterWorker-4] createCollection: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 with provided UUID: 51cfafba-28d0-43f4-963f-34639b1282a5 and options: { uuid: UUID("51cfafba-28d0-43f4-963f-34639b1282a5"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.503-0500 I COMMAND [conn62] command test0_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("4215d65a-2dd1-4ab2-bcf3-1c5ee3325733"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 106715760814429936, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2494244063472087423, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test0_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796658307), clusterTime: Timestamp(1574796658, 516) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("4215d65a-2dd1-4ab2-bcf3-1c5ee3325733"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 516), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44486", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 193ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.803-0500 I INDEX [ReplWriterWorker-11] index build: starting on test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.775-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.503-0500 I INDEX [conn88] Index build completed: c4a86f34-137c-472f-b6cd-e6308fa190f9
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.803-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.784-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: f365e8ad-3f98-4546-a35c-de221f17fe3c: test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b ( 7a897fba-4a89-4c44-b202-0f7f00ca4e4a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.503-0500 I COMMAND [conn88] command test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("5033fb0b-b223-4faf-974b-5b5f981f137e"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 521), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:57630", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 133ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.803-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 2bf281cc-37ba-4912-a4a4-3047ede12b4d: test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.791-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.503-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.803-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.813-0500 I INDEX [ReplWriterWorker-6] index build: starting on test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.505-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.804-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.813-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.508-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 9cb31c4d-d221-44f4-b117-f7822cc5243e: test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f ( a6dfb80e-0801-4b7b-8d26-00ad7c26e35a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.807-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.813-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: d46ca23e-b218-44e3-9a40-7955da2ae269: test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.508-0500 I INDEX [conn77] Index build completed: 9cb31c4d-d221-44f4-b117-f7822cc5243e
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.817-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 2bf281cc-37ba-4912-a4a4-3047ede12b4d: test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb ( bf3cdc90-36f7-41c4-a8c0-a6114d9633bb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.813-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.510-0500 I STORAGE [conn82] createCollection: test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c with generated UUID: b6c880d1-9035-42f2-bb70-5f65e6a39010 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.825-0500 I INDEX [ReplWriterWorker-2] index build: starting on test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.813-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.526-0500 I INDEX [conn82] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.825-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.816-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.527-0500 I INDEX [conn82] Registering index build: dfeb75e8-40b8-47a5-94eb-230462fa2097
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.825-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 42c16521-4919-48e1-8a8e-d5e6bcc9099f: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 (2a92a479-79c8-4f0e-94d8-bb4228231baa ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.824-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d46ca23e-b218-44e3-9a40-7955da2ae269: test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb ( bf3cdc90-36f7-41c4-a8c0-a6114d9633bb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.545-0500 I INDEX [conn82] index build: starting on test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.825-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.831-0500 I INDEX [ReplWriterWorker-8] index build: starting on test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.545-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.825-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.831-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.545-0500 I STORAGE [conn82] Index build initialized: dfeb75e8-40b8-47a5-94eb-230462fa2097: test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c (b6c880d1-9035-42f2-bb70-5f65e6a39010 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.827-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b (7a897fba-4a89-4c44-b202-0f7f00ca4e4a) to test0_fsmdb0.agg_out and drop 4fddd2b8-65d6-44a6-98ec-801e410db392.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.831-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: be94e302-bab3-445f-ad33-60c2925eb01b: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 (2a92a479-79c8-4f0e-94d8-bb4228231baa ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.545-0500 I INDEX [conn82] Waiting for index build to complete: dfeb75e8-40b8-47a5-94eb-230462fa2097
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.828-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.831-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.545-0500 I COMMAND [conn84] renameCollectionForCommand: rename test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 to test0_fsmdb0.agg_out and drop test0_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.828-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test0_fsmdb0.agg_out (4fddd2b8-65d6-44a6-98ec-801e410db392) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 5072), t: 1 } and commit timestamp Timestamp(1574796658, 5072)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.832-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.545-0500 I STORAGE [conn84] dropCollection: test0_fsmdb0.agg_out (de7bc474-c0d1-4e53-be30-0838b3a89414) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 2301), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.828-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test0_fsmdb0.agg_out (4fddd2b8-65d6-44a6-98ec-801e410db392).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.834-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b (7a897fba-4a89-4c44-b202-0f7f00ca4e4a) to test0_fsmdb0.agg_out and drop 4fddd2b8-65d6-44a6-98ec-801e410db392.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.545-0500 I STORAGE [conn84] Finishing collection drop for test0_fsmdb0.agg_out (de7bc474-c0d1-4e53-be30-0838b3a89414).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.828-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 7a897fba-4a89-4c44-b202-0f7f00ca4e4a from test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.834-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.545-0500 I STORAGE [conn84] renameCollection: renaming collection a3fdf230-f6aa-432a-9b30-89f199e2c6c3 from test0_fsmdb0.tmp.agg_out.9a5b104c-fe3c-40c6-a50d-0ba1da25a447 to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.828-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (4fddd2b8-65d6-44a6-98ec-801e410db392)'. Ident: 'index-74--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 5072)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.834-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test0_fsmdb0.agg_out (4fddd2b8-65d6-44a6-98ec-801e410db392) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 5072), t: 1 } and commit timestamp Timestamp(1574796658, 5072)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.545-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (de7bc474-c0d1-4e53-be30-0838b3a89414)'. Ident: 'index-43--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 2301)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.828-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (4fddd2b8-65d6-44a6-98ec-801e410db392)'. Ident: 'index-75--2310912778499990807', commit timestamp: 'Timestamp(1574796658, 5072)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.834-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test0_fsmdb0.agg_out (4fddd2b8-65d6-44a6-98ec-801e410db392).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.545-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (de7bc474-c0d1-4e53-be30-0838b3a89414)'. Ident: 'index-49--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 2301)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.828-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-73--2310912778499990807, commit timestamp: Timestamp(1574796658, 5072)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.834-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 7a897fba-4a89-4c44-b202-0f7f00ca4e4a from test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.545-0500 I STORAGE [conn84] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-38--2588534479858262356, commit timestamp: Timestamp(1574796658, 2301)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.830-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 42c16521-4919-48e1-8a8e-d5e6bcc9099f: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 ( 2a92a479-79c8-4f0e-94d8-bb4228231baa ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.834-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (4fddd2b8-65d6-44a6-98ec-801e410db392)'. Ident: 'index-74--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 5072)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.545-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.988-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51366 #34 (10 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.834-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (4fddd2b8-65d6-44a6-98ec-801e410db392)'. Ident: 'index-75--7234316082034423155', commit timestamp: 'Timestamp(1574796658, 5072)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.545-0500 I COMMAND [conn65] command test0_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f380e6e7-e7db-4d01-a770-8e3aea8758d2"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 764890928983165333, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1525604351932510873, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test0_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796658307), clusterTime: Timestamp(1574796658, 516) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f380e6e7-e7db-4d01-a770-8e3aea8758d2"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 516), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44480", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:385 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 236ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:30:58.988-0500 I NETWORK [conn34] received client metadata from 127.0.0.1:51366 conn34: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.834-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-73--7234316082034423155, commit timestamp: Timestamp(1574796658, 5072)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.546-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.836-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: be94e302-bab3-445f-ad33-60c2925eb01b: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 ( 2a92a479-79c8-4f0e-94d8-bb4228231baa ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.549-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.988-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34724 #35 (11 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.550-0500 I STORAGE [conn84] createCollection: test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c with generated UUID: 4fddd2b8-65d6-44a6-98ec-801e410db392 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:30:58.988-0500 I NETWORK [conn35] received client metadata from 127.0.0.1:34724 conn35: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.552-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: dfeb75e8-40b8-47a5-94eb-230462fa2097: test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c ( b6c880d1-9035-42f2-bb70-5f65e6a39010 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.552-0500 I INDEX [conn82] Index build completed: dfeb75e8-40b8-47a5-94eb-230462fa2097
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.569-0500 I INDEX [conn84] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.569-0500 I COMMAND [conn77] renameCollectionForCommand: rename test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 to test0_fsmdb0.agg_out and drop test0_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.569-0500 I STORAGE [conn77] dropCollection: test0_fsmdb0.agg_out (a3fdf230-f6aa-432a-9b30-89f199e2c6c3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 3613), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.569-0500 I STORAGE [conn77] Finishing collection drop for test0_fsmdb0.agg_out (a3fdf230-f6aa-432a-9b30-89f199e2c6c3).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.569-0500 I STORAGE [conn77] renameCollection: renaming collection 3d78450b-1218-4164-a3d4-22cb658c3066 from test0_fsmdb0.tmp.agg_out.95b56b63-2a07-48c9-a3a3-2a5501ce2718 to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.569-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (a3fdf230-f6aa-432a-9b30-89f199e2c6c3)'. Ident: 'index-44--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 3613)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.740-0500 D4 TXN [conn52] New transaction started with txnNumber: 0 on session with lsid 97ac7edc-dbdb-4a1f-8a81-3c8f466872dc
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.569-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (a3fdf230-f6aa-432a-9b30-89f199e2c6c3)'. Ident: 'index-51--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 3613)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.569-0500 I STORAGE [conn77] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-39--2588534479858262356, commit timestamp: Timestamp(1574796658, 3613)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.569-0500 I COMMAND [conn88] renameCollectionForCommand: rename test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 to test0_fsmdb0.agg_out and drop test0_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.569-0500 I COMMAND [conn80] command test0_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("5033fb0b-b223-4faf-974b-5b5f981f137e"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6468843243802141035, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8699173224764151138, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test0_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796658309), clusterTime: Timestamp(1574796658, 513) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("5033fb0b-b223-4faf-974b-5b5f981f137e"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 517), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:57630", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:385 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 256ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.569-0500 I STORAGE [conn88] dropCollection: test0_fsmdb0.agg_out (3d78450b-1218-4164-a3d4-22cb658c3066) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 3614), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.569-0500 I STORAGE [conn88] Finishing collection drop for test0_fsmdb0.agg_out (3d78450b-1218-4164-a3d4-22cb658c3066).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.569-0500 I STORAGE [conn88] renameCollection: renaming collection f5f461a4-5e3a-4722-9b40-3176f32a641f from test0_fsmdb0.tmp.agg_out.3ab27d8e-c2c3-4df9-8532-78c25bbabe85 to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.570-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (3d78450b-1218-4164-a3d4-22cb658c3066)'. Ident: 'index-46--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 3614)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.570-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (3d78450b-1218-4164-a3d4-22cb658c3066)'. Ident: 'index-55--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 3614)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.570-0500 I STORAGE [conn88] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-41--2588534479858262356, commit timestamp: Timestamp(1574796658, 3614)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.570-0500 I COMMAND [conn85] renameCollectionForCommand: rename test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f to test0_fsmdb0.agg_out and drop test0_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.570-0500 I COMMAND [conn81] command test0_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("14894096-7669-49a5-82f4-aeca25550ea6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 95926518217128548, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8697788170560116533, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test0_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796658309), clusterTime: Timestamp(1574796658, 513) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("14894096-7669-49a5-82f4-aeca25550ea6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 517), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:57626", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:385 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 257ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.570-0500 I STORAGE [conn85] dropCollection: test0_fsmdb0.agg_out (f5f461a4-5e3a-4722-9b40-3176f32a641f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 3615), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.570-0500 I STORAGE [conn85] Finishing collection drop for test0_fsmdb0.agg_out (f5f461a4-5e3a-4722-9b40-3176f32a641f).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.570-0500 I STORAGE [conn85] renameCollection: renaming collection a6dfb80e-0801-4b7b-8d26-00ad7c26e35a from test0_fsmdb0.tmp.agg_out.77a61352-bd29-4563-a442-0f72f8c3aa6f to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.570-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (f5f461a4-5e3a-4722-9b40-3176f32a641f)'. Ident: 'index-45--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 3615)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.570-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (f5f461a4-5e3a-4722-9b40-3176f32a641f)'. Ident: 'index-53--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 3615)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.741-0500 I INDEX [ReplWriterWorker-1] index build: starting on test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.741-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.741-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: c6126266-6588-4585-94f3-0da47aa14c8c: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 (a76156e7-eab0-4737-a0e0-cb79d918c04f ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.741-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.570-0500 I STORAGE [conn85] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-40--2588534479858262356, commit timestamp: Timestamp(1574796658, 3615)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.570-0500 I INDEX [conn84] Registering index build: 1d8f5bb0-f9e9-4367-9ba7-918b007434ae
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.570-0500 I COMMAND [conn64] command test0_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("0cab2b8e-d3fc-4dad-b1d8-353217ca01c7"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7085412695896930862, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8324410379928191055, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test0_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796658448), clusterTime: Timestamp(1574796658, 1030) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("0cab2b8e-d3fc-4dad-b1d8-353217ca01c7"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 1030), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44482", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:385 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 121ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.572-0500 I COMMAND [conn64] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.585-0500 I INDEX [conn84] index build: starting on test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.585-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.742-0500 I INDEX [ReplWriterWorker-2] index build: starting on test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.586-0500 I STORAGE [conn84] Index build initialized: 1d8f5bb0-f9e9-4367-9ba7-918b007434ae: test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c (4fddd2b8-65d6-44a6-98ec-801e410db392 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.586-0500 I INDEX [conn84] Waiting for index build to complete: 1d8f5bb0-f9e9-4367-9ba7-918b007434ae
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.586-0500 I COMMAND [conn81] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.586-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.586-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.588-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.742-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.588-0500 I COMMAND [conn82] renameCollectionForCommand: rename test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c to test0_fsmdb0.agg_out and drop test0_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.588-0500 I STORAGE [conn82] dropCollection: test0_fsmdb0.agg_out (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 4055), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.742-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: d0c4d4fe-7c1b-4828-9ed0-f892935c03c5: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 (a76156e7-eab0-4737-a0e0-cb79d918c04f ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.742-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.588-0500 I STORAGE [conn82] Finishing collection drop for test0_fsmdb0.agg_out (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.742-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.588-0500 I STORAGE [conn82] renameCollection: renaming collection b6c880d1-9035-42f2-bb70-5f65e6a39010 from test0_fsmdb0.tmp.agg_out.9e560e65-6aff-4059-bad0-4cc116acc58c to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.743-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.589-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a)'. Ident: 'index-58--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 4055)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.589-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (a6dfb80e-0801-4b7b-8d26-00ad7c26e35a)'. Ident: 'index-59--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 4055)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.589-0500 I STORAGE [conn82] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-56--2588534479858262356, commit timestamp: Timestamp(1574796658, 4055)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.589-0500 I STORAGE [conn82] createCollection: test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb with generated UUID: bf3cdc90-36f7-41c4-a8c0-a6114d9633bb and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.590-0500 I STORAGE [conn85] createCollection: test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b with generated UUID: 7a897fba-4a89-4c44-b202-0f7f00ca4e4a and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.590-0500 I STORAGE [conn88] createCollection: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 with generated UUID: 2a92a479-79c8-4f0e-94d8-bb4228231baa and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.591-0500 I STORAGE [conn77] createCollection: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 with generated UUID: a76156e7-eab0-4737-a0e0-cb79d918c04f and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.593-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 1d8f5bb0-f9e9-4367-9ba7-918b007434ae: test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c ( 4fddd2b8-65d6-44a6-98ec-801e410db392 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.593-0500 I INDEX [conn84] Index build completed: 1d8f5bb0-f9e9-4367-9ba7-918b007434ae
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.614-0500 I INDEX [conn85] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.614-0500 I INDEX [conn85] Registering index build: 96cc8460-012b-4568-b7f7-c22f5d85caec
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.627-0500 I INDEX [conn82] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.628-0500 I INDEX [conn82] Registering index build: 04265563-739f-4262-a57e-840fbd8e2c79
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.650-0500 I INDEX [conn77] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.658-0500 I INDEX [conn88] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.720-0500 I INDEX [conn85] index build: starting on test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.720-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.720-0500 I STORAGE [conn85] Index build initialized: 96cc8460-012b-4568-b7f7-c22f5d85caec: test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b (7a897fba-4a89-4c44-b202-0f7f00ca4e4a ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.720-0500 I INDEX [conn85] Waiting for index build to complete: 96cc8460-012b-4568-b7f7-c22f5d85caec
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.720-0500 I COMMAND [conn84] renameCollectionForCommand: rename test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c to test0_fsmdb0.agg_out and drop test0_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.720-0500 I STORAGE [conn84] dropCollection: test0_fsmdb0.agg_out (b6c880d1-9035-42f2-bb70-5f65e6a39010) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 4561), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.744-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb) to test0_fsmdb0.agg_out and drop 7a897fba-4a89-4c44-b202-0f7f00ca4e4a.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.720-0500 I STORAGE [conn84] Finishing collection drop for test0_fsmdb0.agg_out (b6c880d1-9035-42f2-bb70-5f65e6a39010).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.720-0500 I STORAGE [conn84] renameCollection: renaming collection 4fddd2b8-65d6-44a6-98ec-801e410db392 from test0_fsmdb0.tmp.agg_out.026806b9-5469-47c2-8ae5-ab50563f6c3c to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.720-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (b6c880d1-9035-42f2-bb70-5f65e6a39010)'. Ident: 'index-62--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 4561)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.720-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (b6c880d1-9035-42f2-bb70-5f65e6a39010)'. Ident: 'index-63--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 4561)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.720-0500 I STORAGE [conn84] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-61--2588534479858262356, commit timestamp: Timestamp(1574796658, 4561)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.721-0500 I INDEX [conn88] Registering index build: 50f9f632-01ff-4f38-b76b-b125da0f11cf
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.745-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb) to test0_fsmdb0.agg_out and drop 7a897fba-4a89-4c44-b202-0f7f00ca4e4a.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.721-0500 I INDEX [conn77] Registering index build: 33a2d355-5967-48ff-8cbe-0de175eec460
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.721-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.721-0500 I COMMAND [conn65] command test0_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f380e6e7-e7db-4d01-a770-8e3aea8758d2"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4786279076763088638, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6154948472850832301, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test0_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796658548), clusterTime: Timestamp(1574796658, 2493) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f380e6e7-e7db-4d01-a770-8e3aea8758d2"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 2685), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44480", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:385 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 171ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.721-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.724-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.724-0500 I STORAGE [conn84] createCollection: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 with generated UUID: 51cfafba-28d0-43f4-963f-34639b1282a5 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.742-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 96cc8460-012b-4568-b7f7-c22f5d85caec: test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b ( 7a897fba-4a89-4c44-b202-0f7f00ca4e4a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.748-0500 I INDEX [conn82] index build: starting on test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.748-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.748-0500 I STORAGE [conn82] Index build initialized: 04265563-739f-4262-a57e-840fbd8e2c79: test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.748-0500 I INDEX [conn82] Waiting for index build to complete: 04265563-739f-4262-a57e-840fbd8e2c79
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.748-0500 I INDEX [conn85] Index build completed: 96cc8460-012b-4568-b7f7-c22f5d85caec
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.749-0500 I COMMAND [conn85] command test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("0cab2b8e-d3fc-4dad-b1d8-353217ca01c7"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 4123), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44482", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 134ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.756-0500 I INDEX [conn84] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.757-0500 I INDEX [conn84] Registering index build: b2346a4f-26aa-4e00-929b-89da0fbbe8dd
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.772-0500 I INDEX [conn88] index build: starting on test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.772-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.772-0500 I STORAGE [conn88] Index build initialized: 50f9f632-01ff-4f38-b76b-b125da0f11cf: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 (2a92a479-79c8-4f0e-94d8-bb4228231baa ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.772-0500 I INDEX [conn88] Waiting for index build to complete: 50f9f632-01ff-4f38-b76b-b125da0f11cf
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.772-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.772-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.773-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.774-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.785-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.787-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.795-0500 I INDEX [conn77] index build: starting on test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.795-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.795-0500 I STORAGE [conn77] Index build initialized: 33a2d355-5967-48ff-8cbe-0de175eec460: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 (a76156e7-eab0-4737-a0e0-cb79d918c04f ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.795-0500 I INDEX [conn77] Waiting for index build to complete: 33a2d355-5967-48ff-8cbe-0de175eec460
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.795-0500 I COMMAND [conn85] renameCollectionForCommand: rename test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b to test0_fsmdb0.agg_out and drop test0_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.795-0500 I STORAGE [conn85] dropCollection: test0_fsmdb0.agg_out (4fddd2b8-65d6-44a6-98ec-801e410db392) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796658, 5072), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.796-0500 I STORAGE [conn85] Finishing collection drop for test0_fsmdb0.agg_out (4fddd2b8-65d6-44a6-98ec-801e410db392).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.796-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 04265563-739f-4262-a57e-840fbd8e2c79: test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb ( bf3cdc90-36f7-41c4-a8c0-a6114d9633bb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.796-0500 I STORAGE [conn85] renameCollection: renaming collection 7a897fba-4a89-4c44-b202-0f7f00ca4e4a from test0_fsmdb0.tmp.agg_out.f0ca6dbb-700e-43c3-ba3a-98668522af4b to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.796-0500 I INDEX [conn82] Index build completed: 04265563-739f-4262-a57e-840fbd8e2c79
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.796-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (4fddd2b8-65d6-44a6-98ec-801e410db392)'. Ident: 'index-66--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 5072)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.796-0500 I COMMAND [conn82] command test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("5033fb0b-b223-4faf-974b-5b5f981f137e"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 4507), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:57630", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 406 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 167ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.796-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (4fddd2b8-65d6-44a6-98ec-801e410db392)'. Ident: 'index-67--2588534479858262356', commit timestamp: 'Timestamp(1574796658, 5072)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.796-0500 I STORAGE [conn85] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-65--2588534479858262356, commit timestamp: Timestamp(1574796658, 5072)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.796-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.796-0500 I COMMAND [conn64] command test0_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("0cab2b8e-d3fc-4dad-b1d8-353217ca01c7"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5170491983072354756, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2430740017831183641, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test0_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796658587), clusterTime: Timestamp(1574796658, 4052) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("0cab2b8e-d3fc-4dad-b1d8-353217ca01c7"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 4055), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44482", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 207ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.745-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.745-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test0_fsmdb0.agg_out (7a897fba-4a89-4c44-b202-0f7f00ca4e4a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796660, 4), t: 1 } and commit timestamp Timestamp(1574796660, 4)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.746-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test0_fsmdb0.agg_out (7a897fba-4a89-4c44-b202-0f7f00ca4e4a).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.746-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection bf3cdc90-36f7-41c4-a8c0-a6114d9633bb from test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.746-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (7a897fba-4a89-4c44-b202-0f7f00ca4e4a)'. Ident: 'index-80--7234316082034423155', commit timestamp: 'Timestamp(1574796660, 4)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.746-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (7a897fba-4a89-4c44-b202-0f7f00ca4e4a)'. Ident: 'index-85--7234316082034423155', commit timestamp: 'Timestamp(1574796660, 4)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.746-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-79--7234316082034423155, commit timestamp: Timestamp(1574796660, 4)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.745-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.746-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test0_fsmdb0.agg_out (7a897fba-4a89-4c44-b202-0f7f00ca4e4a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796660, 4), t: 1 } and commit timestamp Timestamp(1574796660, 4)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.746-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test0_fsmdb0.agg_out (7a897fba-4a89-4c44-b202-0f7f00ca4e4a).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.746-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection bf3cdc90-36f7-41c4-a8c0-a6114d9633bb from test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.746-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (7a897fba-4a89-4c44-b202-0f7f00ca4e4a)'. Ident: 'index-80--2310912778499990807', commit timestamp: 'Timestamp(1574796660, 4)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.746-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (7a897fba-4a89-4c44-b202-0f7f00ca4e4a)'. Ident: 'index-85--2310912778499990807', commit timestamp: 'Timestamp(1574796660, 4)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.746-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-79--2310912778499990807, commit timestamp: Timestamp(1574796660, 4)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.747-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c6126266-6588-4585-94f3-0da47aa14c8c: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 ( a76156e7-eab0-4737-a0e0-cb79d918c04f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.798-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 50f9f632-01ff-4f38-b76b-b125da0f11cf: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 ( 2a92a479-79c8-4f0e-94d8-bb4228231baa ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.798-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.816-0500 I INDEX [conn84] index build: starting on test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.719-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:30:58.988-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46170 #91 (38 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.719-0500 I STORAGE [conn84] Index build initialized: b2346a4f-26aa-4e00-929b-89da0fbbe8dd: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 (51cfafba-28d0-43f4-963f-34639b1282a5 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.719-0500 I INDEX [conn84] Waiting for index build to complete: b2346a4f-26aa-4e00-929b-89da0fbbe8dd
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.719-0500 I INDEX [conn88] Index build completed: 50f9f632-01ff-4f38-b76b-b125da0f11cf
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.719-0500 I COMMAND [conn88] command test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("14894096-7669-49a5-82f4-aeca25550ea6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 4559), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:57626", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 61922 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2060ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.719-0500 I NETWORK [conn91] received client metadata from 127.0.0.1:46170 conn91: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.720-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46172 #92 (39 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.720-0500 I NETWORK [conn92] received client metadata from 127.0.0.1:46172 conn92: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.722-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.722-0500 I COMMAND [conn82] renameCollectionForCommand: rename test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb to test0_fsmdb0.agg_out and drop test0_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.722-0500 I STORAGE [conn82] dropCollection: test0_fsmdb0.agg_out (7a897fba-4a89-4c44-b202-0f7f00ca4e4a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796660, 4), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.722-0500 I STORAGE [conn82] Finishing collection drop for test0_fsmdb0.agg_out (7a897fba-4a89-4c44-b202-0f7f00ca4e4a).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.722-0500 I STORAGE [conn82] renameCollection: renaming collection bf3cdc90-36f7-41c4-a8c0-a6114d9633bb from test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb to test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.722-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (7a897fba-4a89-4c44-b202-0f7f00ca4e4a)'. Ident: 'index-74--2588534479858262356', commit timestamp: 'Timestamp(1574796660, 4)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.722-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (7a897fba-4a89-4c44-b202-0f7f00ca4e4a)'. Ident: 'index-75--2588534479858262356', commit timestamp: 'Timestamp(1574796660, 4)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.722-0500 I STORAGE [conn82] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-70--2588534479858262356, commit timestamp: Timestamp(1574796660, 4)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.722-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.722-0500 I COMMAND [conn82] command test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb appName: "tid:3" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test0_fsmdb0.tmp.agg_out.076fb413-c87b-429e-a488-ee98d14e57fb", to: "test0_fsmdb0.agg_out", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("5033fb0b-b223-4faf-974b-5b5f981f137e"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 5572), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:57630", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 1893244 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 1893ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.723-0500 I COMMAND [conn80] command test0_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("5033fb0b-b223-4faf-974b-5b5f981f137e"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4092555696799165071, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1165203055917570108, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test0_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796658572), clusterTime: Timestamp(1574796658, 3615) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("5033fb0b-b223-4faf-974b-5b5f981f137e"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 3679), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:57630", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:385 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 12012 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2148ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.723-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 33a2d355-5967-48ff-8cbe-0de175eec460: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 ( a76156e7-eab0-4737-a0e0-cb79d918c04f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.723-0500 I INDEX [conn77] Index build completed: 33a2d355-5967-48ff-8cbe-0de175eec460
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.723-0500 I COMMAND [conn77] command test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("4215d65a-2dd1-4ab2-bcf3-1c5ee3325733"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 4559), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44486", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 69647 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2072ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.723-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.723-0500 I STORAGE [conn77] createCollection: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 with generated UUID: e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.725-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.734-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: b2346a4f-26aa-4e00-929b-89da0fbbe8dd: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 ( 51cfafba-28d0-43f4-963f-34639b1282a5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.734-0500 I INDEX [conn84] Index build completed: b2346a4f-26aa-4e00-929b-89da0fbbe8dd
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.734-0500 I COMMAND [conn84] command test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f380e6e7-e7db-4d01-a770-8e3aea8758d2"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 4565), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44480", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 1018 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 1977ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.737-0500 I SHARDING [conn55] CMD: shardcollection: { _shardsvrShardCollection: "test0_fsmdb0.agg_out", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("5033fb0b-b223-4faf-974b-5b5f981f137e"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796660, 135), signature: { hash: BinData(0, EF73DDC6DD0D4DB8517FD69A1BBAF2428F2E8242), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:57630", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796660, 8), t: 1 } }, $db: "admin" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.748-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: d0c4d4fe-7c1b-4828-9ed0-f892935c03c5: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 ( a76156e7-eab0-4737-a0e0-cb79d918c04f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.737-0500 I SHARDING [conn55] about to log metadata event into changelog: { _id: "nz_desktop:20004-2019-11-26T14:31:00.737-0500-5ddd7d74cf8184c2e14932e5", server: "nz_desktop:20004", shard: "shard-rs1", clientAddr: "127.0.0.1:46028", time: new Date(1574796660737), what: "shardCollection.start", ns: "test0_fsmdb0.agg_out", details: { shardKey: { _id: "hashed" }, collection: "test0_fsmdb0.agg_out", uuid: UUID("bf3cdc90-36f7-41c4-a8c0-a6114d9633bb"), empty: false, fromMapReduce: false, primary: "shard-rs1:shard-rs1/localhost:20004,localhost:20005,localhost:20006", numChunks: 1 } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.744-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.agg_out to version 1|0||5ddd7d74cf8184c2e14932e8 took 1 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.744-0500 I SHARDING [conn55] Marking collection test0_fsmdb0.agg_out as collection version: 1|0||5ddd7d74cf8184c2e14932e8, shard version: 1|0||5ddd7d74cf8184c2e14932e8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.744-0500 I SHARDING [conn55] Created 1 chunk(s) for: test0_fsmdb0.agg_out, producing collection version 1|0||5ddd7d74cf8184c2e14932e8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.744-0500 I SHARDING [conn55] about to log metadata event into changelog: { _id: "nz_desktop:20004-2019-11-26T14:31:00.744-0500-5ddd7d74cf8184c2e14932f3", server: "nz_desktop:20004", shard: "shard-rs1", clientAddr: "127.0.0.1:46028", time: new Date(1574796660744), what: "shardCollection.end", ns: "test0_fsmdb0.agg_out", details: { version: "1|0||5ddd7d74cf8184c2e14932e8" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.744-0500 I INDEX [conn77] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.744-0500 I STORAGE [ShardServerCatalogCacheLoader-1] createCollection: config.cache.chunks.test0_fsmdb0.agg_out with provided UUID: b53e5b23-cfff-452a-9863-a2ca857d4f54 and options: { uuid: UUID("b53e5b23-cfff-452a-9863-a2ca857d4f54") }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.744-0500 I INDEX [conn77] Registering index build: df67cbfe-50c0-4ab8-8a82-703c13936346
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.749-0500 I STORAGE [ReplWriterWorker-10] createCollection: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 with provided UUID: e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc and options: { uuid: UUID("e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.749-0500 I STORAGE [ReplWriterWorker-4] createCollection: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 with provided UUID: e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc and options: { uuid: UUID("e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.763-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.764-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.767-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: done building index _id_ on ns config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.767-0500 I INDEX [ShardServerCatalogCacheLoader-1] Registering index build: d784f533-8979-4ddd-8285-7936abbc37f6
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.774-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.774-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44566 #43 (8 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.774-0500 I NETWORK [conn43] received client metadata from 127.0.0.1:44566 conn43: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.775-0500 I INDEX [conn77] index build: starting on test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.775-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.775-0500 I STORAGE [conn77] Index build initialized: df67cbfe-50c0-4ab8-8a82-703c13936346: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 (e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.775-0500 I INDEX [conn77] Waiting for index build to complete: df67cbfe-50c0-4ab8-8a82-703c13936346
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.775-0500 I COMMAND [conn88] renameCollectionForCommand: rename test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 to test0_fsmdb0.agg_out and drop test0_fsmdb0.agg_out.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.776-0500 Implicit session: session { "id" : UUID("5b9ffb02-5488-465e-85ae-6f29e5ecde10") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.777-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.777-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44568 #44 (9 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.778-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.778-0500 I NETWORK [conn44] received client metadata from 127.0.0.1:44568 conn44: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.779-0500 I INDEX [ReplWriterWorker-11] index build: starting on test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.779-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.779-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 74aeaf3d-1674-406c-9b52-47f9ca937fe6: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 (51cfafba-28d0-43f4-963f-34639b1282a5 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.780-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.780-0500 Implicit session: session { "id" : UUID("4f36c170-1b42-44ed-a234-c813d4c07a73") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.780-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.781-0500 I INDEX [ReplWriterWorker-12] index build: starting on test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.781-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.781-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 54f08890-6019-41ca-9629-6151f9f53837: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 (51cfafba-28d0-43f4-963f-34639b1282a5 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.781-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.781-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.781-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.782-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38712 #76 (34 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.783-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.782-0500 I NETWORK [conn76] received client metadata from 127.0.0.1:38712 conn76: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.783-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.785-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38714 #77 (35 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.785-0500 I NETWORK [conn77] received client metadata from 127.0.0.1:38714 conn77: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.785-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46182 #93 (40 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:00.786-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51752 #39 (13 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.786-0500 I NETWORK [conn93] received client metadata from 127.0.0.1:46182 conn93: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:00.786-0500 I NETWORK [conn39] received client metadata from 127.0.0.1:51752 conn39: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.786-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 54f08890-6019-41ca-9629-6151f9f53837: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 ( 51cfafba-28d0-43f4-963f-34639b1282a5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.786-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 74aeaf3d-1674-406c-9b52-47f9ca937fe6: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 ( 51cfafba-28d0-43f4-963f-34639b1282a5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:00.786-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52642 #39 (13 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:00.787-0500 I NETWORK [conn39] received client metadata from 127.0.0.1:52642 conn39: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.790-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46188 #94 (41 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.790-0500 I NETWORK [conn94] received client metadata from 127.0.0.1:46188 conn94: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.791-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: starting on config.cache.chunks.test0_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.791-0500 I INDEX [ShardServerCatalogCacheLoader-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.791-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Index build initialized: d784f533-8979-4ddd-8285-7936abbc37f6: config.cache.chunks.test0_fsmdb0.agg_out (b53e5b23-cfff-452a-9863-a2ca857d4f54 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.791-0500 I INDEX [ShardServerCatalogCacheLoader-1] Waiting for index build to complete: d784f533-8979-4ddd-8285-7936abbc37f6
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.791-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.791-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51388 #35 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.791-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.791-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.791-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.791-0500 I NETWORK [conn35] received client metadata from 127.0.0.1:51388 conn35: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.791-0500 [jsTest] New session started with sessionID: { "id" : UUID("fcbf78b4-178d-46f4-ab84-13a7ca98861e") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.791-0500 I COMMAND [conn88] CMD: drop test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.792-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.791-0500 I STORAGE [ReplWriterWorker-3] createCollection: config.cache.chunks.test0_fsmdb0.agg_out with provided UUID: b53e5b23-cfff-452a-9863-a2ca857d4f54 and options: { uuid: UUID("b53e5b23-cfff-452a-9863-a2ca857d4f54") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.792-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.792-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34750 #36 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.792-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.792-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.792-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.792-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.792-0500 [jsTest] New session started with sessionID: { "id" : UUID("539a99f4-332c-4f99-946c-f3dd53bf44a1") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.792-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.793-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.792-0500 I STORAGE [ReplWriterWorker-2] createCollection: config.cache.chunks.test0_fsmdb0.agg_out with provided UUID: b53e5b23-cfff-452a-9863-a2ca857d4f54 and options: { uuid: UUID("b53e5b23-cfff-452a-9863-a2ca857d4f54") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.793-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.793-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.793-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.793-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.793-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.793-0500 [jsTest] New session started with sessionID: { "id" : UUID("e56f87a0-23ce-405f-a0ea-3c7dd78d4462") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.793-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.793-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.793-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.792-0500 I COMMAND [conn82] renameCollectionForCommand: rename test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 to test0_fsmdb0.agg_out and drop test0_fsmdb0.agg_out.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.793-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.792-0500 I NETWORK [conn36] received client metadata from 127.0.0.1:34750 conn36: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.794-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.794-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.794-0500 [jsTest] New session started with sessionID: { "id" : UUID("b738e6f1-1a99-4521-9847-d10cda5e6539") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.794-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.794-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.794-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.794-0500 [jsTest] New session started with sessionID: { "id" : UUID("39f480a8-8080-4da3-b2f9-d86b826c6167") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.794-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.794-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.795-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.795-0500 [jsTest] New session started with sessionID: { "id" : UUID("0bd790d7-9961-42cf-8756-f1a66377bf19") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.795-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.795-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.795-0500 Running data consistency checks for replica set: shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.792-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:00.793-0500 I COMMAND [conn14] command test0_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("14894096-7669-49a5-82f4-aeca25550ea6") }, $clusterTime: { clusterTime: Timestamp(1574796658, 4052), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test0_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266\", to: \"test0_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:575 protocol:op_msg 2206ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.794-0500 I COMMAND [conn37] command test0_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("4215d65a-2dd1-4ab2-bcf3-1c5ee3325733") }, $clusterTime: { clusterTime: Timestamp(1574796658, 4055), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test0_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095\", to: \"test0_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:575 protocol:op_msg 2203ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.792-0500 I STORAGE [conn88] dropCollection: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 (2a92a479-79c8-4f0e-94d8-bb4228231baa) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.794-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:00.795-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.793-0500 I STORAGE [conn88] Finishing collection drop for test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 (2a92a479-79c8-4f0e-94d8-bb4228231baa).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.795-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.agg_out to version 1|0||5ddd7d74cf8184c2e14932e8 took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.793-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 (2a92a479-79c8-4f0e-94d8-bb4228231baa)'. Ident: 'index-77--2588534479858262356', commit timestamp: 'Timestamp(1574796660, 1268)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.793-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 (2a92a479-79c8-4f0e-94d8-bb4228231baa)'. Ident: 'index-83--2588534479858262356', commit timestamp: 'Timestamp(1574796660, 1268)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.793-0500 I STORAGE [conn88] Deferring table drop for collection 'test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266'. Ident: collection-71--2588534479858262356, commit timestamp: Timestamp(1574796660, 1268)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.793-0500 I COMMAND [conn82] CMD: drop test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.793-0500 I COMMAND [conn81] command test0_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("14894096-7669-49a5-82f4-aeca25550ea6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1222306069334554780, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3257157698943400265, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test0_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796658587), clusterTime: Timestamp(1574796658, 4052) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("14894096-7669-49a5-82f4-aeca25550ea6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 4055), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:57626", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266\", to: \"test0_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:745 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2203ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:00.796-0500 Running data consistency checks for replica set: shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.796-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7d745cde74b6784bb2c7 unlocked.
[fsm_workload_test:agg_out] 2019-11-26T14:31:01.438-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:03.693-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:03.693-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:03.693-0500 [jsTest] New session started with sessionID: { "id" : UUID("5ce822ba-d17a-4802-b1e2-31791b6b82df") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:03.693-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:03.693-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:00.796-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.agg_out to version 1|0||5ddd7d74cf8184c2e14932e8 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.694-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.694-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.694-0500 [jsTest] New session started with sessionID: { "id" : UUID("a34afc7a-2ba5-418a-b513-7d374c6aa75b") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.694-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.694-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.694-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.694-0500 [jsTest] New session started with sessionID: { "id" : UUID("494d1aa8-c946-46e7-a436-b109e67857ee") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.694-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.694-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.694-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.694-0500 [jsTest] New session started with sessionID: { "id" : UUID("1f4a370b-ea0e-4cd2-82ba-6c3942212409") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.694-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.694-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.695-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.695-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.695-0500 [jsTest] New session started with sessionID: { "id" : UUID("14091967-041c-439c-b21c-6f27cbd6453a") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.695-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.695-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.695-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.695-0500 [jsTest] New session started with sessionID: { "id" : UUID("95b2f03a-7c40-4119-a599-de2810c39838") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.695-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.695-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.695-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.695-0500 [jsTest] New session started with sessionID: { "id" : UUID("ef120f08-f38e-4461-a9c2-6453152f7fa6") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.800-0500 W CONTROL [conn77] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 0 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:03.695-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:00.800-0500 W CONTROL [conn39] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.695-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:00.801-0500 W CONTROL [conn39] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 0 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:03.696-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.802-0500 W CONTROL [conn35] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 4 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.696-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.802-0500 W CONTROL [conn36] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 4 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:03.696-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.819-0500 I NETWORK [conn43] end connection 127.0.0.1:44566 (8 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.696-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:03.696-0500 [jsTest] Workload(s) completed in 3449 ms: jstests/concurrency/fsm_workloads/agg_out.js
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.793-0500 I STORAGE [conn82] dropCollection: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 (a76156e7-eab0-4737-a0e0-cb79d918c04f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[fsm_workload_test:agg_out] 2019-11-26T14:31:03.697-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:00.912-0500 I NETWORK [conn15] end connection 127.0.0.1:57630 (2 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.697-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js finished.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.798-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7d745cde74b6784bb2c5 unlocked.
[fsm_workload_test:agg_out] 2019-11-26T14:31:03.697-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.816-0500 W CONTROL [conn77] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.698-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash_background.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash_background"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash_background.js
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:00.805-0500 I SHARDING [conn39] Marking collection admin.run_check_repl_dbhash_background as collection version:
[fsm_workload_test:agg_out] 2019-11-26T14:31:03.699-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:00.806-0500 I SHARDING [conn39] Marking collection admin.run_check_repl_dbhash_background as collection version:
[fsm_workload_test:agg_out] 2019-11-26T14:31:03.703-0500 FSM workload jstests/concurrency/fsm_workloads/agg_out.js finished.
[executor:fsm_workload_test:job0] 2019-11-26T14:31:03.704-0500 agg_out.js ran in 6.07 seconds: no failures detected.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.807-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.807-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.822-0500 I COMMAND [conn35] command test0_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f380e6e7-e7db-4d01-a770-8e3aea8758d2") }, $clusterTime: { clusterTime: Timestamp(1574796658, 4561), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test0_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2\", to: \"test0_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test0_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:713 protocol:op_msg 2099ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.793-0500 I STORAGE [conn82] Finishing collection drop for test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 (a76156e7-eab0-4737-a0e0-cb79d918c04f).
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:01.361-0500 I COMMAND [conn14] command test0_fsmdb0 appName: "tid:1" command: enableSharding { enableSharding: "test0_fsmdb0", lsid: { id: UUID("14894096-7669-49a5-82f4-aeca25550ea6") }, $clusterTime: { clusterTime: Timestamp(1574796660, 2031), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:163 protocol:op_msg 505ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.825-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d745cde74b6784bb2e8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.819-0500 I NETWORK [conn76] end connection 127.0.0.1:38712 (34 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.705-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js started with pid 15126.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:00.817-0500 W CONTROL [conn39] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:00.817-0500 W CONTROL [conn39] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.813-0500 I COMMAND [ReplWriterWorker-14] CMD: drop test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266
[CheckReplDBHashInBackground:job0] Pausing the background check repl dbhash thread.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.813-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.846-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.793-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 (a76156e7-eab0-4737-a0e0-cb79d918c04f)'. Ident: 'index-76--2588534479858262356', commit timestamp: 'Timestamp(1574796660, 1269)'
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:01.370-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.826-0500 I SHARDING [conn19] Enabling sharding for database [test0_fsmdb0] in config db
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.820-0500 I NETWORK [conn77] end connection 127.0.0.1:38714 (33 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:00.820-0500 I NETWORK [conn39] end connection 127.0.0.1:51752 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:00.820-0500 I NETWORK [conn39] end connection 127.0.0.1:52642 (12 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.813-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 (2a92a479-79c8-4f0e-94d8-bb4228231baa) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796660, 1268), t: 1 } and commit timestamp Timestamp(1574796660, 1268)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.814-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 (2a92a479-79c8-4f0e-94d8-bb4228231baa) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796660, 1268), t: 1 } and commit timestamp Timestamp(1574796660, 1268)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.848-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.agg_out to version 1|0||5ddd7d74cf8184c2e14932e8 took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.793-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 (a76156e7-eab0-4737-a0e0-cb79d918c04f)'. Ident: 'index-85--2588534479858262356', commit timestamp: 'Timestamp(1574796660, 1269)'
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:01.372-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.agg_out to version 1|0||5ddd7d74cf8184c2e14932e8 took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.827-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d745cde74b6784bb2e8 unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.824-0500 I COMMAND [conn71] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:00.958-0500 I NETWORK [conn38] end connection 127.0.0.1:51730 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:00.958-0500 I NETWORK [conn38] end connection 127.0.0.1:52614 (11 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.813-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 (2a92a479-79c8-4f0e-94d8-bb4228231baa).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.814-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 (2a92a479-79c8-4f0e-94d8-bb4228231baa).
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.848-0500 I COMMAND [conn36] command test0_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("0cab2b8e-d3fc-4dad-b1d8-353217ca01c7") }, $clusterTime: { clusterTime: Timestamp(1574796658, 5072), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test0_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37\", to: \"test0_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test0_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:714 protocol:op_msg 128ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.793-0500 I STORAGE [conn82] Deferring table drop for collection 'test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095'. Ident: collection-72--2588534479858262356, commit timestamp: Timestamp(1574796660, 1269)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:01.429-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.831-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d745cde74b6784bb2ee
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.850-0500 I COMMAND [conn65] CMD: dropIndexes test0_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:01.445-0500 I NETWORK [conn34] end connection 127.0.0.1:51578 (10 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:01.445-0500 I NETWORK [conn34] end connection 127.0.0.1:52468 (10 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.813-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 (2a92a479-79c8-4f0e-94d8-bb4228231baa)'. Ident: 'index-82--2310912778499990807', commit timestamp: 'Timestamp(1574796660, 1268)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.814-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 (2a92a479-79c8-4f0e-94d8-bb4228231baa)'. Ident: 'index-82--7234316082034423155', commit timestamp: 'Timestamp(1574796660, 1268)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.854-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.794-0500 I COMMAND [conn62] command test0_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("4215d65a-2dd1-4ab2-bcf3-1c5ee3325733"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3900896996414575327, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 390876421217070420, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test0_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796658590), clusterTime: Timestamp(1574796658, 4055) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("4215d65a-2dd1-4ab2-bcf3-1c5ee3325733"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 4058), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44486", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095\", to: \"test0_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:745 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2202ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:01.430-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.agg_out to version 1|0||5ddd7d74cf8184c2e14932e8 took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.833-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d745cde74b6784bb2f0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.851-0500 I COMMAND [conn71] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:01.456-0500 I NETWORK [conn33] end connection 127.0.0.1:51534 (9 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:01.456-0500 I NETWORK [conn33] end connection 127.0.0.1:52424 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.813-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 (2a92a479-79c8-4f0e-94d8-bb4228231baa)'. Ident: 'index-91--2310912778499990807', commit timestamp: 'Timestamp(1574796660, 1268)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.814-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266 (2a92a479-79c8-4f0e-94d8-bb4228231baa)'. Ident: 'index-91--7234316082034423155', commit timestamp: 'Timestamp(1574796660, 1268)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.856-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.agg_out to version 1|0||5ddd7d74cf8184c2e14932e8 took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.794-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:01.435-0500 I NETWORK [conn14] end connection 127.0.0.1:57626 (1 connection now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.841-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.859-0500 I COMMAND [conn71] CMD: dropIndexes test0_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.813-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266'. Ident: collection-81--2310912778499990807, commit timestamp: Timestamp(1574796660, 1268)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.814-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266'. Ident: collection-81--7234316082034423155, commit timestamp: Timestamp(1574796660, 1268)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.920-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.795-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:01.445-0500 I NETWORK [conn13] end connection 127.0.0.1:57534 (0 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.841-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.agg_out to version 1|0||5ddd7d74cf8184c2e14932e8 took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.860-0500 I COMMAND [conn65] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.814-0500 I COMMAND [ReplWriterWorker-3] CMD: drop test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.814-0500 I COMMAND [ReplWriterWorker-2] CMD: drop test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.921-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.agg_out to version 1|0||5ddd7d74cf8184c2e14932e8 took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.800-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d784f533-8979-4ddd-8285-7936abbc37f6: config.cache.chunks.test0_fsmdb0.agg_out ( b53e5b23-cfff-452a-9863-a2ca857d4f54 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.844-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d745cde74b6784bb2f0' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.864-0500 I COMMAND [conn65] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.814-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 (a76156e7-eab0-4737-a0e0-cb79d918c04f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796660, 1269), t: 1 } and commit timestamp Timestamp(1574796660, 1269)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.814-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 (a76156e7-eab0-4737-a0e0-cb79d918c04f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796660, 1269), t: 1 } and commit timestamp Timestamp(1574796660, 1269)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.937-0500 I NETWORK [conn35] end connection 127.0.0.1:44480 (7 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.800-0500 I INDEX [ShardServerCatalogCacheLoader-1] Index build completed: d784f533-8979-4ddd-8285-7936abbc37f6
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.845-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d745cde74b6784bb2ee' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.866-0500 I COMMAND [conn65] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.814-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 (a76156e7-eab0-4737-a0e0-cb79d918c04f).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.814-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 (a76156e7-eab0-4737-a0e0-cb79d918c04f).
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.947-0500 I NETWORK [conn44] end connection 127.0.0.1:44568 (6 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.800-0500 I SHARDING [ShardServerCatalogCacheLoader-1] Marking collection config.cache.chunks.test0_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.851-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d745cde74b6784bb2ff
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.866-0500 I COMMAND [conn68] CMD: dropIndexes test0_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.814-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 (a76156e7-eab0-4737-a0e0-cb79d918c04f)'. Ident: 'index-84--2310912778499990807', commit timestamp: 'Timestamp(1574796660, 1269)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.815-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 (a76156e7-eab0-4737-a0e0-cb79d918c04f)'. Ident: 'index-84--7234316082034423155', commit timestamp: 'Timestamp(1574796660, 1269)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.947-0500 I NETWORK [conn37] end connection 127.0.0.1:44486 (5 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.801-0500 W CONTROL [conn94] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 4 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.852-0500 I SHARDING [conn19] Enabling sharding for database [test0_fsmdb0] in config db
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.869-0500 I COMMAND [conn68] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.814-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 (a76156e7-eab0-4737-a0e0-cb79d918c04f)'. Ident: 'index-93--2310912778499990807', commit timestamp: 'Timestamp(1574796660, 1269)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.815-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095 (a76156e7-eab0-4737-a0e0-cb79d918c04f)'. Ident: 'index-93--7234316082034423155', commit timestamp: 'Timestamp(1574796660, 1269)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.950-0500 I NETWORK [conn42] end connection 127.0.0.1:44544 (4 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.802-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.853-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d745cde74b6784bb2ff' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.874-0500 I COMMAND [conn68] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.814-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095'. Ident: collection-83--2310912778499990807, commit timestamp: Timestamp(1574796660, 1269)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.815-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095'. Ident: collection-83--7234316082034423155, commit timestamp: Timestamp(1574796660, 1269)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:00.955-0500 I NETWORK [conn36] end connection 127.0.0.1:44482 (3 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.803-0500 I STORAGE [conn82] createCollection: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 with generated UUID: 591add96-2a86-498b-a685-4a16bd1b825c and options: { temp: true }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.856-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d745cde74b6784bb306
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.875-0500 I COMMAND [conn68] CMD: dropIndexes test0_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.823-0500 I NETWORK [conn35] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.824-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34756 #37 (13 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:01.443-0500 I NETWORK [conn14] end connection 127.0.0.1:44338 (2 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.805-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: df67cbfe-50c0-4ab8-8a82-703c13936346: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 ( e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.857-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d745cde74b6784bb30e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.893-0500 I COMMAND [conn65] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.823-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.824-0500 I NETWORK [conn37] received client metadata from 127.0.0.1:34756 conn37: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:01.444-0500 I NETWORK [conn16] end connection 127.0.0.1:44382 (1 connection now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.805-0500 I INDEX [conn77] Index build completed: df67cbfe-50c0-4ab8-8a82-703c13936346
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.859-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.895-0500 I COMMAND [conn65] CMD: dropIndexes test0_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.823-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.832-0500 I INDEX [ReplWriterWorker-5] index build: starting on config.cache.chunks.test0_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:01.446-0500 I NETWORK [conn17] end connection 127.0.0.1:44390 (0 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.820-0500 I INDEX [conn82] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.860-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.agg_out to version 1|0||5ddd7d74cf8184c2e14932e8 took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.897-0500 I COMMAND [conn65] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.823-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.832-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.821-0500 I COMMAND [conn84] CMD: drop test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.860-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d745cde74b6784bb30e' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.899-0500 I COMMAND [conn65] CMD: dropIndexes test0_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.824-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51392 #36 (12 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.832-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 5607714d-de1d-424e-9618-8bc325bda75c: config.cache.chunks.test0_fsmdb0.agg_out (b53e5b23-cfff-452a-9863-a2ca857d4f54 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.821-0500 I INDEX [conn82] Registering index build: 899dbd2f-6729-414d-8fba-be8b53680202
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.862-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d745cde74b6784bb306' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.899-0500 I COMMAND [conn68] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.824-0500 I NETWORK [conn36] received client metadata from 127.0.0.1:51392 conn36: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.832-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.821-0500 I STORAGE [conn84] dropCollection: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 (51cfafba-28d0-43f4-963f-34639b1282a5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.917-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d745cde74b6784bb31e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.902-0500 I COMMAND [conn68] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.825-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.833-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.821-0500 I STORAGE [conn84] Finishing collection drop for test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 (51cfafba-28d0-43f4-963f-34639b1282a5).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.917-0500 I SHARDING [conn19] Enabling sharding for database [test0_fsmdb0] in config db
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.914-0500 I COMMAND [conn68] CMD: dropIndexes test0_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.825-0500 I SHARDING [updateShardIdentityConfigString] Updating config server with confirmed set shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.835-0500 I SHARDING [ReplWriterWorker-5] Marking collection config.cache.chunks.test0_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.821-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 (51cfafba-28d0-43f4-963f-34639b1282a5)'. Ident: 'index-82--2588534479858262356', commit timestamp: 'Timestamp(1574796660, 1524)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.919-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d745cde74b6784bb31e' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.916-0500 I COMMAND [conn68] CMD: dropIndexes test0_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.832-0500 I INDEX [ReplWriterWorker-8] index build: starting on config.cache.chunks.test0_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.836-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: drain applied 1 side writes (inserted: 1, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.821-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 (51cfafba-28d0-43f4-963f-34639b1282a5)'. Ident: 'index-87--2588534479858262356', commit timestamp: 'Timestamp(1574796660, 1524)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.922-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d745cde74b6784bb325
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.917-0500 I COMMAND [conn68] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.832-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.836-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.821-0500 I STORAGE [conn84] Deferring table drop for collection 'test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2'. Ident: collection-80--2588534479858262356, commit timestamp: Timestamp(1574796660, 1524)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.925-0500 I SHARDING [conn19] distributed lock 'test0_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d745cde74b6784bb32a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.925-0500 I COMMAND [conn68] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.832-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: a9116a37-945f-4ef1-b71a-c6b46c6e6a95: config.cache.chunks.test0_fsmdb0.agg_out (b53e5b23-cfff-452a-9863-a2ca857d4f54 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.839-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 5607714d-de1d-424e-9618-8bc325bda75c: config.cache.chunks.test0_fsmdb0.agg_out ( b53e5b23-cfff-452a-9863-a2ca857d4f54 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.822-0500 I COMMAND [conn65] command test0_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f380e6e7-e7db-4d01-a770-8e3aea8758d2"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3195221918134952140, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3631621058727923157, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test0_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796658722), clusterTime: Timestamp(1574796658, 4561) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f380e6e7-e7db-4d01-a770-8e3aea8758d2"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796658, 4561), signature: { hash: BinData(0, 4EC9AFA4935ABE17D6EB91BA63A7DD3D91017881), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44480", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2\", to: \"test0_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test0_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:863 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2098ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.926-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.928-0500 I COMMAND [conn68] CMD: dropIndexes test0_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.832-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.854-0500 I INDEX [ReplWriterWorker-15] index build: starting on test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.824-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46196 #95 (42 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.927-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.agg_out to version 1|0||5ddd7d74cf8184c2e14932e8 took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.934-0500 I COMMAND [conn68] CMD: dropIndexes test0_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.833-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.854-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.824-0500 I NETWORK [conn95] received client metadata from 127.0.0.1:46196 conn95: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.929-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d745cde74b6784bb32a' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.942-0500 I COMMAND [conn65] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.834-0500 I SHARDING [ReplWriterWorker-5] Marking collection config.cache.chunks.test0_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.854-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 5c5f4ab6-cef7-46fa-87bb-0c4c8debf8fb: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 (e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.824-0500 I COMMAND [conn80] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.930-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d745cde74b6784bb325' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.951-0500 I NETWORK [conn75] end connection 127.0.0.1:38698 (32 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.836-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 1 side writes (inserted: 1, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.855-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.825-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46200 #96 (43 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.951-0500 I NETWORK [conn78] end connection 127.0.0.1:55950 (38 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:00.958-0500 I NETWORK [conn74] end connection 127.0.0.1:38694 (31 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.836-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.855-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.825-0500 I NETWORK [conn96] received client metadata from 127.0.0.1:46200 conn96: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:00.958-0500 I NETWORK [conn77] end connection 127.0.0.1:55948 (37 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:01.380-0500 I COMMAND [conn71] CMD: dropIndexes test0_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.845-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: a9116a37-945f-4ef1-b71a-c6b46c6e6a95: config.cache.chunks.test0_fsmdb0.agg_out ( b53e5b23-cfff-452a-9863-a2ca857d4f54 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.859-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.839-0500 I INDEX [conn82] index build: starting on test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.359-0500 I SHARDING [conn22] distributed lock 'test0_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d745cde74b6784bb30b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:01.382-0500 I COMMAND [conn71] CMD: dropIndexes test0_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.853-0500 I INDEX [ReplWriterWorker-8] index build: starting on test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.860-0500 I STORAGE [ReplWriterWorker-13] createCollection: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 with provided UUID: 591add96-2a86-498b-a685-4a16bd1b825c and options: { uuid: UUID("591add96-2a86-498b-a685-4a16bd1b825c"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.839-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.359-0500 I SHARDING [conn22] Enabling sharding for database [test0_fsmdb0] in config db
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:01.388-0500 I COMMAND [conn71] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.853-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.860-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 5c5f4ab6-cef7-46fa-87bb-0c4c8debf8fb: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 ( e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.840-0500 I STORAGE [conn82] Index build initialized: 899dbd2f-6729-414d-8fba-be8b53680202: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 (591add96-2a86-498b-a685-4a16bd1b825c ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.360-0500 I SHARDING [conn22] distributed lock with ts: '5ddd7d745cde74b6784bb30b' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:01.402-0500 I COMMAND [conn71] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.853-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: f7bd8406-a49e-40a7-9709-6f4dfc7d2c32: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 (e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.874-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.840-0500 I INDEX [conn82] Waiting for index build to complete: 899dbd2f-6729-414d-8fba-be8b53680202
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.360-0500 I COMMAND [conn22] command admin.$cmd appName: "tid:1" command: _configsvrEnableSharding { _configsvrEnableSharding: "test0_fsmdb0", writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("14894096-7669-49a5-82f4-aeca25550ea6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1574796660, 2031), signature: { hash: BinData(0, EF73DDC6DD0D4DB8517FD69A1BBAF2428F2E8242), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:57626", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796660, 1784), t: 1 } }, $db: "admin" } numYields:0 reslen:505 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 5 } }, ReplicationStateTransition: { acquireCount: { w: 9 } }, Global: { acquireCount: { r: 5, w: 4 } }, Database: { acquireCount: { r: 4, w: 4 } }, Collection: { acquireCount: { r: 3, w: 4 } }, Mutex: { acquireCount: { r: 10 } }, oplog: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 505ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:01.409-0500 I COMMAND [conn71] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.853-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.877-0500 I COMMAND [ReplWriterWorker-8] CMD: drop test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.840-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.363-0500 I SHARDING [conn22] distributed lock 'test0_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d755cde74b6784bb33c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:01.444-0500 I NETWORK [conn52] end connection 127.0.0.1:38506 (30 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.854-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.878-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 (51cfafba-28d0-43f4-963f-34639b1282a5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796660, 1524), t: 1 } and commit timestamp Timestamp(1574796660, 1524)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.840-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.364-0500 I SHARDING [conn22] distributed lock 'test0_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d755cde74b6784bb33e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:01.444-0500 I NETWORK [conn53] end connection 127.0.0.1:38528 (29 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.857-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.878-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 (51cfafba-28d0-43f4-963f-34639b1282a5).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.844-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.366-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:01.445-0500 I NETWORK [conn54] end connection 127.0.0.1:38540 (28 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.858-0500 I STORAGE [ReplWriterWorker-11] createCollection: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 with provided UUID: 591add96-2a86-498b-a685-4a16bd1b825c and options: { uuid: UUID("591add96-2a86-498b-a685-4a16bd1b825c"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.878-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 (51cfafba-28d0-43f4-963f-34639b1282a5)'. Ident: 'index-88--7234316082034423155', commit timestamp: 'Timestamp(1574796660, 1524)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.847-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 899dbd2f-6729-414d-8fba-be8b53680202: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 ( 591add96-2a86-498b-a685-4a16bd1b825c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.367-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.agg_out to version 1|0||5ddd7d74cf8184c2e14932e8 took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:01.445-0500 I NETWORK [conn56] end connection 127.0.0.1:38548 (27 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.860-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: f7bd8406-a49e-40a7-9709-6f4dfc7d2c32: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 ( e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.878-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 (51cfafba-28d0-43f4-963f-34639b1282a5)'. Ident: 'index-97--7234316082034423155', commit timestamp: 'Timestamp(1574796660, 1524)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.847-0500 I INDEX [conn82] Index build completed: 899dbd2f-6729-414d-8fba-be8b53680202
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.368-0500 I SHARDING [conn22] distributed lock with ts: '5ddd7d755cde74b6784bb33e' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:01.445-0500 I NETWORK [conn55] end connection 127.0.0.1:38542 (26 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.873-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.878-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2'. Ident: collection-87--7234316082034423155, commit timestamp: Timestamp(1574796660, 1524)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.847-0500 I COMMAND [conn82] CMD: drop test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.369-0500 I SHARDING [conn22] distributed lock with ts: '5ddd7d755cde74b6784bb33c' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:01.456-0500 I NETWORK [conn50] end connection 127.0.0.1:38498 (25 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.877-0500 I COMMAND [ReplWriterWorker-14] CMD: drop test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.900-0500 I INDEX [ReplWriterWorker-6] index build: starting on test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.847-0500 I STORAGE [conn82] dropCollection: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 (e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.419-0500 I SHARDING [conn22] distributed lock 'test0_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d755cde74b6784bb34d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.877-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 (51cfafba-28d0-43f4-963f-34639b1282a5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796660, 1524), t: 1 } and commit timestamp Timestamp(1574796660, 1524)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.900-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.848-0500 I STORAGE [conn82] Finishing collection drop for test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 (e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.419-0500 I SHARDING [conn22] Enabling sharding for database [test0_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.877-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 (51cfafba-28d0-43f4-963f-34639b1282a5).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.900-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 0b1ccb75-f55a-4b4c-9652-80aecd5b66cf: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 (591add96-2a86-498b-a685-4a16bd1b825c ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.848-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 (e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc)'. Ident: 'index-90--2588534479858262356', commit timestamp: 'Timestamp(1574796660, 2031)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.420-0500 I SHARDING [conn22] distributed lock with ts: '5ddd7d755cde74b6784bb34d' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.877-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 (51cfafba-28d0-43f4-963f-34639b1282a5)'. Ident: 'index-88--2310912778499990807', commit timestamp: 'Timestamp(1574796660, 1524)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.900-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.848-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 (e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc)'. Ident: 'index-92--2588534479858262356', commit timestamp: 'Timestamp(1574796660, 2031)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.422-0500 I SHARDING [conn22] distributed lock 'test0_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d755cde74b6784bb353
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.877-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2 (51cfafba-28d0-43f4-963f-34639b1282a5)'. Ident: 'index-97--2310912778499990807', commit timestamp: 'Timestamp(1574796660, 1524)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.901-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.848-0500 I STORAGE [conn82] Deferring table drop for collection 'test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37'. Ident: collection-89--2588534479858262356, commit timestamp: Timestamp(1574796660, 2031)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.423-0500 I SHARDING [conn22] distributed lock 'test0_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d755cde74b6784bb355
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.877-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2'. Ident: collection-87--2310912778499990807, commit timestamp: Timestamp(1574796660, 1524)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.904-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.848-0500 I COMMAND [conn64] command test0_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("0cab2b8e-d3fc-4dad-b1d8-353217ca01c7"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4835274491843002641, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4361139262870369814, ns: "test0_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test0_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796660720), clusterTime: Timestamp(1574796658, 5072) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("0cab2b8e-d3fc-4dad-b1d8-353217ca01c7"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796660, 4), signature: { hash: BinData(0, EF73DDC6DD0D4DB8517FD69A1BBAF2428F2E8242), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44482", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796658, 513), t: 1 } }, $db: "test0_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37\", to: \"test0_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test0_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:884 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 125ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.424-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 from version {} to version { uuid: UUID("50bac46b-d129-4149-aa81-48f1b27975b4"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.899-0500 I INDEX [ReplWriterWorker-2] index build: starting on test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.907-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 0b1ccb75-f55a-4b4c-9652-80aecd5b66cf: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 ( 591add96-2a86-498b-a685-4a16bd1b825c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.850-0500 I COMMAND [conn64] CMD: dropIndexes test0_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.425-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.agg_out to version 1|0||5ddd7d74cf8184c2e14932e8 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.899-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.909-0500 I COMMAND [ReplWriterWorker-13] CMD: drop test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.851-0500 I COMMAND [conn81] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.426-0500 I SHARDING [conn22] distributed lock with ts: '5ddd7d755cde74b6784bb355' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.899-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: b76b0928-6234-4a18-8df3-ca35977a872f: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 (591add96-2a86-498b-a685-4a16bd1b825c ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.909-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 (e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796660, 2031), t: 1 } and commit timestamp Timestamp(1574796660, 2031)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.859-0500 I COMMAND [conn81] CMD: dropIndexes test0_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.428-0500 I SHARDING [conn22] distributed lock with ts: '5ddd7d755cde74b6784bb353' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.899-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.909-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 (e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.860-0500 I COMMAND [conn64] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.444-0500 I NETWORK [conn72] end connection 127.0.0.1:55750 (36 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.900-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.909-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 (e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc)'. Ident: 'index-96--7234316082034423155', commit timestamp: 'Timestamp(1574796660, 2031)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.863-0500 I COMMAND [conn77] CMD: drop test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.444-0500 I NETWORK [conn73] end connection 127.0.0.1:55786 (35 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.903-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.909-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 (e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc)'. Ident: 'index-103--7234316082034423155', commit timestamp: 'Timestamp(1574796660, 2031)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.863-0500 I STORAGE [conn77] dropCollection: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 (591add96-2a86-498b-a685-4a16bd1b825c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.445-0500 I NETWORK [conn74] end connection 127.0.0.1:55796 (34 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.904-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: b76b0928-6234-4a18-8df3-ca35977a872f: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 ( 591add96-2a86-498b-a685-4a16bd1b825c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.909-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37'. Ident: collection-95--7234316082034423155, commit timestamp: Timestamp(1574796660, 2031)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.863-0500 I STORAGE [conn77] Finishing collection drop for test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 (591add96-2a86-498b-a685-4a16bd1b825c).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.446-0500 I NETWORK [conn75] end connection 127.0.0.1:55798 (33 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.908-0500 I COMMAND [ReplWriterWorker-3] CMD: drop test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.916-0500 I COMMAND [ReplWriterWorker-0] CMD: drop test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.863-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 (591add96-2a86-498b-a685-4a16bd1b825c)'. Ident: 'index-98--2588534479858262356', commit timestamp: 'Timestamp(1574796660, 2533)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:01.456-0500 I NETWORK [conn71] end connection 127.0.0.1:55748 (32 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.908-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 (e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796660, 2031), t: 1 } and commit timestamp Timestamp(1574796660, 2031)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.916-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 (591add96-2a86-498b-a685-4a16bd1b825c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796660, 2533), t: 1 } and commit timestamp Timestamp(1574796660, 2533)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.863-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 (591add96-2a86-498b-a685-4a16bd1b825c)'. Ident: 'index-99--2588534479858262356', commit timestamp: 'Timestamp(1574796660, 2533)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.908-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 (e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.916-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 (591add96-2a86-498b-a685-4a16bd1b825c).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.863-0500 I STORAGE [conn77] Deferring table drop for collection 'test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0'. Ident: collection-97--2588534479858262356, commit timestamp: Timestamp(1574796660, 2533)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.908-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 (e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc)'. Ident: 'index-96--2310912778499990807', commit timestamp: 'Timestamp(1574796660, 2031)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.916-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 (591add96-2a86-498b-a685-4a16bd1b825c)'. Ident: 'index-106--7234316082034423155', commit timestamp: 'Timestamp(1574796660, 2533)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.864-0500 I COMMAND [conn62] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.908-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37 (e3d2a940-0a98-4e67-925d-5d4fe0fc0ecc)'. Ident: 'index-103--2310912778499990807', commit timestamp: 'Timestamp(1574796660, 2031)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.916-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 (591add96-2a86-498b-a685-4a16bd1b825c)'. Ident: 'index-107--7234316082034423155', commit timestamp: 'Timestamp(1574796660, 2533)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.866-0500 I COMMAND [conn62] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.908-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37'. Ident: collection-95--2310912778499990807, commit timestamp: Timestamp(1574796660, 2031)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.916-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0'. Ident: collection-105--7234316082034423155, commit timestamp: Timestamp(1574796660, 2533)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.866-0500 I COMMAND [conn64] CMD: dropIndexes test0_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.916-0500 I COMMAND [ReplWriterWorker-9] CMD: drop test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.945-0500 W CONTROL [conn36] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 43 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.869-0500 I COMMAND [conn64] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.917-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 (591add96-2a86-498b-a685-4a16bd1b825c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796660, 2533), t: 1 } and commit timestamp Timestamp(1574796660, 2533)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.947-0500 I NETWORK [conn36] end connection 127.0.0.1:34750 (12 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.874-0500 I COMMAND [conn64] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.917-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 (591add96-2a86-498b-a685-4a16bd1b825c).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:00.958-0500 I NETWORK [conn35] end connection 127.0.0.1:34724 (11 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.875-0500 I COMMAND [conn62] CMD: dropIndexes test0_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.917-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 (591add96-2a86-498b-a685-4a16bd1b825c)'. Ident: 'index-106--2310912778499990807', commit timestamp: 'Timestamp(1574796660, 2533)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:01.446-0500 I NETWORK [conn29] end connection 127.0.0.1:34580 (10 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.893-0500 I COMMAND [conn64] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.917-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0 (591add96-2a86-498b-a685-4a16bd1b825c)'. Ident: 'index-107--2310912778499990807', commit timestamp: 'Timestamp(1574796660, 2533)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:01.456-0500 I NETWORK [conn28] end connection 127.0.0.1:34538 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.895-0500 I COMMAND [conn64] CMD: dropIndexes test0_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.917-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0'. Ident: collection-105--2310912778499990807, commit timestamp: Timestamp(1574796660, 2533)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:03.170-0500 I NETWORK [conn4] end connection 127.0.0.1:34204 (8 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.897-0500 I COMMAND [conn64] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.945-0500 W CONTROL [conn35] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 40 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.899-0500 I COMMAND [conn64] CMD: dropIndexes test0_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.947-0500 I NETWORK [conn35] end connection 127.0.0.1:51388 (11 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.899-0500 I COMMAND [conn62] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:00.958-0500 I NETWORK [conn34] end connection 127.0.0.1:51366 (10 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.902-0500 I COMMAND [conn62] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:01.446-0500 I NETWORK [conn28] end connection 127.0.0.1:51218 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.914-0500 I COMMAND [conn62] CMD: dropIndexes test0_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:01.456-0500 I NETWORK [conn27] end connection 127.0.0.1:51180 (8 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.916-0500 I COMMAND [conn62] CMD: dropIndexes test0_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.917-0500 I COMMAND [conn62] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.925-0500 I COMMAND [conn62] CMD: dropIndexes test0_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.928-0500 I COMMAND [conn62] CMD: dropIndexes test0_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.934-0500 I COMMAND [conn62] CMD: dropIndexes test0_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.942-0500 I COMMAND [conn64] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.944-0500 W CONTROL [conn94] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 47 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.947-0500 I NETWORK [conn93] end connection 127.0.0.1:46182 (42 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.947-0500 I NETWORK [conn94] end connection 127.0.0.1:46188 (41 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.951-0500 I NETWORK [conn92] end connection 127.0.0.1:46172 (40 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:00.958-0500 I NETWORK [conn91] end connection 127.0.0.1:46170 (39 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:01.380-0500 I COMMAND [conn81] CMD: dropIndexes test0_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:01.382-0500 I COMMAND [conn81] CMD: dropIndexes test0_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:01.388-0500 I COMMAND [conn81] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:01.402-0500 I COMMAND [conn81] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:01.409-0500 I COMMAND [conn81] CMD: dropIndexes test0_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:01.444-0500 I NETWORK [conn50] end connection 127.0.0.1:45988 (38 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:01.445-0500 I NETWORK [conn51] end connection 127.0.0.1:45996 (37 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:01.445-0500 I NETWORK [conn52] end connection 127.0.0.1:46016 (36 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:01.445-0500 I NETWORK [conn54] end connection 127.0.0.1:46024 (35 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:01.446-0500 I NETWORK [conn53] end connection 127.0.0.1:46018 (34 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:01.456-0500 I NETWORK [conn48] end connection 127.0.0.1:45984 (33 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:03.170-0500 I CONNPOOL [ReplNetwork] Ending connection to host localhost:20006 due to bad connection status: CallbackCanceled: Callback was canceled; 1 connections to that host remain open
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.727-0500 MongoDB shell version v0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.778-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:03.778-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44594 #45 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:03.778-0500 I NETWORK [conn45] received client metadata from 127.0.0.1:44594 conn45: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.780-0500 Implicit session: session { "id" : UUID("67f55571-caf0-4b48-8087-f0e6564cc198") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.782-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.784-0500 true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.787-0500 2019-11-26T14:31:03.787-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.787-0500 2019-11-26T14:31:03.787-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:03.788-0500 I NETWORK [listener] connection accepted from 127.0.0.1:55998 #79 (33 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:03.788-0500 I NETWORK [conn79] received client metadata from 127.0.0.1:55998 conn79: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.788-0500 2019-11-26T14:31:03.788-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:03.789-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56000 #80 (34 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:03.789-0500 I NETWORK [conn80] received client metadata from 127.0.0.1:56000 conn80: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.790-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.790-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.790-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.790-0500 [jsTest] New session started with sessionID: { "id" : UUID("95fc2a06-6f1d-4909-8912-05d6d56abaaf") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.790-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.790-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.790-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.791-0500 2019-11-26T14:31:03.791-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.792-0500 2019-11-26T14:31:03.792-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.792-0500 2019-11-26T14:31:03.792-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.792-0500 2019-11-26T14:31:03.792-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:03.792-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51776 #40 (10 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:03.792-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38744 #78 (26 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:03.792-0500 I NETWORK [conn40] received client metadata from 127.0.0.1:51776 conn40: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:03.792-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52668 #40 (10 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:03.792-0500 I NETWORK [conn78] received client metadata from 127.0.0.1:38744 conn78: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.793-0500 2019-11-26T14:31:03.792-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:03.792-0500 I NETWORK [conn40] received client metadata from 127.0.0.1:52668 conn40: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:03.793-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38748 #79 (27 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:03.793-0500 I NETWORK [conn79] received client metadata from 127.0.0.1:38748 conn79: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.793-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.793-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.793-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.794-0500 [jsTest] New session started with sessionID: { "id" : UUID("dcfd59c3-7981-4bd0-9b7f-5863f2da552f") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.794-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.794-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.794-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.794-0500 2019-11-26T14:31:03.794-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.794-0500 2019-11-26T14:31:03.794-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.794-0500 2019-11-26T14:31:03.794-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.794-0500 2019-11-26T14:31:03.794-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:03.794-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51414 #41 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:03.794-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34776 #38 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:03.794-0500 I NETWORK [conn41] received client metadata from 127.0.0.1:51414 conn41: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:03.794-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46220 #97 (34 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:03.795-0500 I NETWORK [conn38] received client metadata from 127.0.0.1:34776 conn38: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:03.795-0500 I NETWORK [conn97] received client metadata from 127.0.0.1:46220 conn97: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.795-0500 2019-11-26T14:31:03.795-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:03.795-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46222 #98 (35 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:03.796-0500 I NETWORK [conn98] received client metadata from 127.0.0.1:46222 conn98: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.796-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.796-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.796-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.796-0500 [jsTest] New session started with sessionID: { "id" : UUID("bdbbd11d-dc3b-47bb-b8e6-0fe76657f287") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.796-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.796-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.796-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.797-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.848-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:03.849-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44616 #46 (2 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:03.849-0500 I NETWORK [conn46] received client metadata from 127.0.0.1:44616 conn46: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.851-0500 Implicit session: session { "id" : UUID("47d7286f-9c41-40d3-9286-9363259858bc") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.852-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:03.852-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44618 #47 (3 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.852-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:03.852-0500 I NETWORK [conn47] received client metadata from 127.0.0.1:44618 conn47: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.854-0500 Implicit session: session { "id" : UUID("51b2b13f-24e2-4271-bb72-060271dae45f") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.855-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:03.856-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38762 #80 (28 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:03.856-0500 I NETWORK [conn80] received client metadata from 127.0.0.1:38762 conn80: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:03.859-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38764 #81 (29 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:03.859-0500 I NETWORK [conn81] received client metadata from 127.0.0.1:38764 conn81: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:03.859-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46232 #99 (36 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:03.859-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51802 #41 (11 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:03.860-0500 I NETWORK [conn99] received client metadata from 127.0.0.1:46232 conn99: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:03.860-0500 I NETWORK [conn41] received client metadata from 127.0.0.1:51802 conn41: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:03.860-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52692 #41 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:03.860-0500 I NETWORK [conn41] received client metadata from 127.0.0.1:52692 conn41: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.861-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.861-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.861-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.861-0500 [jsTest] New session started with sessionID: { "id" : UUID("e96beddc-d348-491a-90e7-051af60195f4") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.861-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.861-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.861-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.862-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.862-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.862-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.862-0500 [jsTest] New session started with sessionID: { "id" : UUID("e164e33c-1f8b-48b4-b320-ae0398091fa7") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.862-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.862-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.862-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:03.862-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46238 #100 (37 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:03.862-0500 I NETWORK [conn100] received client metadata from 127.0.0.1:46238 conn100: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:03.863-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51438 #42 (10 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.863-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:03.863-0500 I NETWORK [conn42] received client metadata from 127.0.0.1:51438 conn42: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.863-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.863-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.863-0500 [jsTest] New session started with sessionID: { "id" : UUID("dd9ed948-63f3-406e-86cb-31d1b0ebfd7f") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.863-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.863-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.863-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:03.863-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34800 #39 (10 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:03.863-0500 I NETWORK [conn39] received client metadata from 127.0.0.1:34800 conn39: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.864-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.865-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.865-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.865-0500 [jsTest] New session started with sessionID: { "id" : UUID("de016c06-cb05-47ac-9f76-96a73b251c62") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.865-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.865-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.865-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.865-0500 Running data consistency checks for replica set: shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.865-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.865-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.865-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.865-0500 [jsTest] New session started with sessionID: { "id" : UUID("f1e4d0ea-6429-41e8-8686-f239a316f05c") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.865-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.865-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.865-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.866-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.866-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.866-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.866-0500 [jsTest] New session started with sessionID: { "id" : UUID("18dea1cd-6e1e-4e60-a672-97a30b384023") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.866-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.866-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.866-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.868-0500 Running data consistency checks for replica set: shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500 [jsTest] New session started with sessionID: { "id" : UUID("5b5b82af-1565-476e-845b-92d44e070ddf") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500 [jsTest] New session started with sessionID: { "id" : UUID("dd679f2c-6b6f-4ed2-b75e-e6dbaaf826c3") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500 [jsTest] New session started with sessionID: { "id" : UUID("b64fecb7-dfde-4f51-a760-4467fd4f9d77") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.870-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:03.870-0500 W CONTROL [conn81] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:03.871-0500 W CONTROL [conn41] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:03.871-0500 W CONTROL [conn41] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.872-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.872-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.872-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.872-0500 [jsTest] New session started with sessionID: { "id" : UUID("e11cf28e-2636-40e6-a832-b45c71b4be08") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.872-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.872-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.872-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.872-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.872-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.873-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.873-0500 [jsTest] New session started with sessionID: { "id" : UUID("68716bd5-a519-41d8-bcb1-ca69efa33570") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.873-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.873-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.873-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.873-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.873-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.873-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.873-0500 [jsTest] New session started with sessionID: { "id" : UUID("49d7f9b3-a795-446d-b192-d19771af5b09") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.873-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.873-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.873-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:03.873-0500 W CONTROL [conn100] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 47 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:03.873-0500 W CONTROL [conn42] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 40 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:03.874-0500 W CONTROL [conn39] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 43 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:03.887-0500 W CONTROL [conn81] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:03.888-0500 W CONTROL [conn41] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:03.888-0500 W CONTROL [conn41] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 0 }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:03.890-0500 I NETWORK [conn46] end connection 127.0.0.1:44616 (2 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:03.890-0500 I NETWORK [conn80] end connection 127.0.0.1:38762 (28 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:03.890-0500 I NETWORK [conn81] end connection 127.0.0.1:38764 (27 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:03.890-0500 I NETWORK [conn41] end connection 127.0.0.1:51802 (10 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:03.890-0500 I NETWORK [conn41] end connection 127.0.0.1:52692 (10 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:03.896-0500 W CONTROL [conn100] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 47 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:03.896-0500 W CONTROL [conn42] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 40 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:03.897-0500 W CONTROL [conn39] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 43 }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:03.898-0500 I NETWORK [conn47] end connection 127.0.0.1:44618 (1 connection now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:03.898-0500 I NETWORK [conn99] end connection 127.0.0.1:46232 (36 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:03.898-0500 I NETWORK [conn100] end connection 127.0.0.1:46238 (35 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:03.898-0500 I NETWORK [conn42] end connection 127.0.0.1:51438 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:03.899-0500 I NETWORK [conn39] end connection 127.0.0.1:34800 (9 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:03.901-0500 I NETWORK [conn45] end connection 127.0.0.1:44594 (0 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:03.901-0500 I NETWORK [conn80] end connection 127.0.0.1:56000 (33 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:03.902-0500 I NETWORK [conn79] end connection 127.0.0.1:38748 (26 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:03.902-0500 I NETWORK [conn98] end connection 127.0.0.1:46222 (34 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:03.909-0500 I NETWORK [conn97] end connection 127.0.0.1:46220 (33 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:03.909-0500 I NETWORK [conn78] end connection 127.0.0.1:38744 (25 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:03.909-0500 I NETWORK [conn38] end connection 127.0.0.1:34776 (8 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:03.909-0500 I NETWORK [conn40] end connection 127.0.0.1:51776 (9 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:03.909-0500 I NETWORK [conn79] end connection 127.0.0.1:55998 (32 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:03.909-0500 I NETWORK [conn40] end connection 127.0.0.1:52668 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:03.909-0500 I NETWORK [conn41] end connection 127.0.0.1:51414 (8 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:03.910-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js finished.
[executor:fsm_workload_test:job0] 2019-11-26T14:31:03.910-0500 agg_out:CheckReplDBHashInBackground ran in 6.27 seconds: no failures detected.
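The lines above show the background dbhash hook enabling the `WTPreserveSnapshotHistoryIndefinitely` failpoint (`mode: 1`) on every node of both shards before hashing, then disabling it (`mode: 0`) once the check finishes. A minimal sketch of how one could verify from a log like this that every enable has a matching disable per node (the regex and function names here are illustrative, not part of resmoke):

```python
import re

# Each node logs one line per flip of the WTPreserveSnapshotHistoryIndefinitely
# failpoint: mode 1 enables it before the dbhash comparison, mode 0 disables it
# afterward. This pairs the two events per node.
FAILPOINT_RE = re.compile(
    r"\[ShardedClusterFixture:job0:(?P<node>[\w:]+)\].*"
    r"failpoint: WTPreserveSnapshotHistoryIndefinitely set to: "
    r"\{ mode: (?P<mode>[01])"
)

def unbalanced_failpoints(log_lines):
    """Return the nodes where the failpoint was enabled but never disabled."""
    enabled = set()
    for line in log_lines:
        m = FAILPOINT_RE.search(line)
        if not m:
            continue
        node, mode = m.group("node"), m.group("mode")
        if mode == "1":
            enabled.add(node)
        else:
            enabled.discard(node)
    return enabled
```

In the run above the set comes back empty for all six shard nodes, which is consistent with the hook reporting "no failures detected".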
[executor:fsm_workload_test:job0] 2019-11-26T14:31:03.911-0500 Running agg_out:CheckReplDBHash...
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:03.912-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash.js
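The invocation above shows how resmoke hands test configuration to the shell: a single `--eval` string of `TestData["..."] = ...;` assignments, with nested objects initialized via `new Object()`. A sketch of that serialization shape, under the assumption that a flattening helper like this is all that is needed (the function and sample dict are illustrative, not resmoke's actual internals):

```python
# Flatten a Python dict into the 'TestData["k"] = v;' shell assignment
# style visible in the logged command line, including nested objects.
def eval_assignments(prefix, value):
    """Return a list of mongo-shell assignment statements for `value`."""
    if isinstance(value, dict):
        stmts = [f"{prefix} = new Object();"]
        for k, v in value.items():
            stmts.extend(eval_assignments(f'{prefix}["{k}"]', v))
        return stmts
    if isinstance(value, bool):
        return [f"{prefix} = {str(value).lower()};"]  # JS booleans are lowercase
    if isinstance(value, str):
        return [f'{prefix} = "{value}";']
    return [f"{prefix} = {value};"]

# Illustrative subset of the TestData seen in the command line above.
test_data = {
    "minPort": 20020,
    "failIfUnterminatedProcesses": True,
    "setParameters": {"transactionLifetimeLimitSeconds": 86400},
}
eval_str = " ".join(eval_assignments("TestData", test_data))
```

Serializing assignments this way (rather than passing a JSON blob) lets the shell build `TestData` incrementally with plain `--eval` statements.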
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:03.918-0500 JSTest jstests/hooks/run_check_repl_dbhash.js started with pid 15157.
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:03.940-0500 MongoDB shell version v0.0.0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:03.991-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:03.992-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44636 #48 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:03.992-0500 I NETWORK [conn48] received client metadata from 127.0.0.1:44636 conn48: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:03.994-0500 Implicit session: session { "id" : UUID("f5f1fa32-3fdd-4fd3-acbd-79f1130d934b") }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:03.996-0500 MongoDB server version: 0.0.0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:03.997-0500 true
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.001-0500 2019-11-26T14:31:04.001-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.001-0500 2019-11-26T14:31:04.001-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:04.001-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56040 #81 (33 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:04.001-0500 I NETWORK [conn81] received client metadata from 127.0.0.1:56040 conn81: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.002-0500 2019-11-26T14:31:04.002-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:04.002-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56042 #82 (34 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:04.002-0500 I NETWORK [conn82] received client metadata from 127.0.0.1:56042 conn82: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.003-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.003-0500 [jsTest] New session started with sessionID: { "id" : UUID("35dc566c-94cf-44ba-bef6-75a1d9af4cc5") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.003-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.005-0500 2019-11-26T14:31:04.005-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.005-0500 2019-11-26T14:31:04.005-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.005-0500 2019-11-26T14:31:04.005-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.005-0500 2019-11-26T14:31:04.005-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:04.006-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51818 #42 (10 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.006-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38786 #82 (26 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:04.006-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52710 #42 (10 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:04.006-0500 I NETWORK [conn42] received client metadata from 127.0.0.1:51818 conn42: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:04.006-0500 I NETWORK [conn42] received client metadata from 127.0.0.1:52710 conn42: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.006-0500 I NETWORK [conn82] received client metadata from 127.0.0.1:38786 conn82: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.006-0500 2019-11-26T14:31:04.006-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.006-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38790 #83 (27 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.006-0500 I NETWORK [conn83] received client metadata from 127.0.0.1:38790 conn83: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.007-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.007-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.007-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.007-0500 [jsTest] New session started with sessionID: { "id" : UUID("a8d52288-bc1a-427b-85f0-2f1ecc9e03ca") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.007-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.007-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.007-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.008-0500 2019-11-26T14:31:04.008-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.008-0500 2019-11-26T14:31:04.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.008-0500 2019-11-26T14:31:04.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.008-0500 2019-11-26T14:31:04.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:04.008-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34816 #40 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.008-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46260 #101 (34 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:04.008-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51460 #43 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:04.008-0500 I NETWORK [conn40] received client metadata from 127.0.0.1:34816 conn40: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.008-0500 I NETWORK [conn101] received client metadata from 127.0.0.1:46260 conn101: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:04.009-0500 I NETWORK [conn43] received client metadata from 127.0.0.1:51460 conn43: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.009-0500 2019-11-26T14:31:04.009-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.009-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46264 #102 (35 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.009-0500 I NETWORK [conn102] received client metadata from 127.0.0.1:46264 conn102: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.010-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.010-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.010-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.010-0500 [jsTest] New session started with sessionID: { "id" : UUID("6a23a522-c38c-4f6e-aefe-fcc4420e6346") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.010-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.010-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.010-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.011-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "sharded cluster", "configsvr" : { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }, "shards" : { "shard-rs0" : { "type" : "replica set", "primary" : "localhost:20001", "nodes" : [ "localhost:20001", "localhost:20002", "localhost:20003" ] }, "shard-rs1" : { "type" : "replica set", "primary" : "localhost:20004", "nodes" : [ "localhost:20004", "localhost:20005", "localhost:20006" ] } }, "mongos" : { "type" : "mongos router", "nodes" : [ "localhost:20007", "localhost:20008" ] } }
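The hook above skips the dbhash comparison for the config server replica set because it has a single node (`localhost:20000` only), so there is no secondary to compare against; the two three-node shards are still checked. A minimal sketch of that decision from the printed topology document (the function name is illustrative):

```python
# Decide whether the CSRS dbhash check can be skipped: with fewer than two
# nodes there is no secondary whose hashes could diverge from the primary's.
def should_skip_csrs_check(topology):
    """Return True when the config server replica set has < 2 nodes."""
    configsvr = topology.get("configsvr", {})
    return (configsvr.get("type") == "replica set"
            and len(configsvr.get("nodes", [])) < 2)

# Subset of the topology document logged above.
topology = {
    "type": "sharded cluster",
    "configsvr": {"type": "replica set", "primary": "localhost:20000",
                  "nodes": ["localhost:20000"]},
}
```

With the one-node CSRS in this fixture the check is skipped, matching the log message.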
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.061-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:04.061-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44658 #49 (2 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:04.062-0500 I NETWORK [conn49] received client metadata from 127.0.0.1:44658 conn49: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.063-0500 Implicit session: session { "id" : UUID("a29a7f2d-d313-4958-aa87-8cee9a3ec515") }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.064-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:04.064-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44660 #50 (3 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:04.064-0500 I NETWORK [conn50] received client metadata from 127.0.0.1:44660 conn50: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.065-0500 MongoDB server version: 0.0.0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.066-0500 Implicit session: session { "id" : UUID("a18c5c55-46e4-4890-8d69-ab4118e2671e") }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.068-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.069-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38804 #84 (28 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.069-0500 I NETWORK [conn84] received client metadata from 127.0.0.1:38804 conn84: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.070-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.070-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.070-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.070-0500 [jsTest] New session started with sessionID: { "id" : UUID("cfb80674-d213-42e8-9c4b-c92da54886d9") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.070-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.070-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.070-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.071-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46272 #103 (36 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.071-0500 I NETWORK [conn103] received client metadata from 127.0.0.1:46272 conn103: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500 Recreating replica set from config {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500 "_id" : "shard-rs0",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500 "version" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500 "protocolVersion" : NumberLong(1),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500 "writeConcernMajorityJournalDefault" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500 "members" : [
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500 "_id" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500 "host" : "localhost:20001",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500 "priority" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.072-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "_id" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "host" : "localhost:20002",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "_id" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "host" : "localhost:20003",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.073-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.074-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.074-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.074-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.074-0500 ],
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.074-0500 "settings" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.074-0500 "chainingAllowed" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.074-0500 "heartbeatIntervalMillis" : 2000,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:04.072-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51844 #43 (11 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.074-0500 "heartbeatTimeoutSecs" : 10,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.074-0500 "electionTimeoutMillis" : 86400000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.074-0500 "catchUpTimeoutMillis" : -1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.074-0500 "catchUpTakeoverDelayMillis" : 30000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.074-0500 "getLastErrorModes" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.074-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.074-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.074-0500 "getLastErrorDefaults" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500 "w" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500 "wtimeout" : 0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500 "replicaSetId" : ObjectId("5ddd7d683bbfe7fa5630d3b8")
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500 [jsTest] New session started with sessionID: { "id" : UUID("0029f0e2-08a7-4795-a0a8-036709f3805a") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500 [jsTest] New session started with sessionID: { "id" : UUID("86fd7148-6eba-41b7-8637-111d4906cdda") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500 Recreating replica set from config {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500 "_id" : "shard-rs1",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500 "version" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.075-0500 "protocolVersion" : NumberLong(1),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "writeConcernMajorityJournalDefault" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "members" : [
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "_id" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "host" : "localhost:20004",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "priority" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "_id" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "host" : "localhost:20005",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.076-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "_id" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "host" : "localhost:20006",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 ],
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "settings" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "chainingAllowed" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "heartbeatIntervalMillis" : 2000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "heartbeatTimeoutSecs" : 10,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "electionTimeoutMillis" : 86400000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "catchUpTimeoutMillis" : -1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "catchUpTakeoverDelayMillis" : 30000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.077-0500 "getLastErrorModes" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500 "getLastErrorDefaults" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500 "w" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500 "wtimeout" : 0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500 "replicaSetId" : ObjectId("5ddd7d6bcf8184c2e1492eba")
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500 [jsTest] New session started with sessionID: { "id" : UUID("4b0e8bc5-b887-42f3-b4b7-7a2ec87c145b") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500 [jsTest] New session started with sessionID: { "id" : UUID("0e16e7d8-67f4-4573-82c8-9f40d82ac1f3") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.078-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500 [jsTest] New session started with sessionID: { "id" : UUID("e976b3d0-a6c9-4c73-bbbe-d78a08b35516") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500 [jsTest] New session started with sessionID: { "id" : UUID("f658d12a-1ccf-408e-b99c-7780e9b80c87") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500 [jsTest] New session started with sessionID: { "id" : UUID("1b7ab559-ff45-428b-bbe8-c20ada9a1e24") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.079-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.072-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38808 #85 (29 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:04.073-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52734 #43 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:04.073-0500 I NETWORK [conn43] received client metadata from 127.0.0.1:51844 conn43: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:04.076-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34842 #41 (10 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.074-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46280 #104 (37 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:04.075-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51480 #44 (10 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.072-0500 I NETWORK [conn85] received client metadata from 127.0.0.1:38808 conn85: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:04.073-0500 I NETWORK [conn43] received client metadata from 127.0.0.1:52734 conn43: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:04.076-0500 I NETWORK [conn41] received client metadata from 127.0.0.1:34842 conn41: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:04.087-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.075-0500 I NETWORK [conn104] received client metadata from 127.0.0.1:46280 conn104: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.049-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.049-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.049-0500 [jsTest] Freezing nodes: [localhost:20002,localhost:20003]
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.049-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.049-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.049-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:04.075-0500 I NETWORK [conn44] received client metadata from 127.0.0.1:51480 conn44: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:04.087-0500 I COMMAND [conn43] Attempting to step down in response to replSetStepDown command
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:04.089-0500 I COMMAND [conn43] Attempting to step down in response to replSetStepDown command
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.091-0500 I COMMAND [conn85] CMD fsync: sync:1 lock:1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.050-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.050-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.050-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.050-0500 [jsTest] Freezing nodes: [localhost:20005,localhost:20006]
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.050-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.050-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:04.286-0500 I NETWORK [conn49] end connection 127.0.0.1:44658 (2 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.050-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.050-0500 ReplSetTest awaitReplication: going to check only localhost:20002,localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.050-0500 ReplSetTest awaitReplication: starting: optime for primary, localhost:20001, is { "ts" : Timestamp(1574796664, 6), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.051-0500 ReplSetTest awaitReplication: checking secondaries against latest primary optime { "ts" : Timestamp(1574796664, 6), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.051-0500 ReplSetTest awaitReplication: checking secondary #0: localhost:20002
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.051-0500 ReplSetTest awaitReplication: secondary #0, localhost:20002, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.051-0500 ReplSetTest awaitReplication: checking secondary #1: localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.051-0500 ReplSetTest awaitReplication: secondary #1, localhost:20003, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.051-0500 ReplSetTest awaitReplication: finished: all 2 secondaries synced at optime { "ts" : Timestamp(1574796664, 6), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.051-0500 checkDBHashesForReplSet checking data hashes against primary: localhost:20001
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.051-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20002
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.051-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.051-0500 ReplSetTest awaitReplication: going to check only localhost:20005,localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.051-0500 ReplSetTest awaitReplication: starting: optime for primary, localhost:20004, is { "ts" : Timestamp(1574796664, 8), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.051-0500 ReplSetTest awaitReplication: checking secondaries against latest primary optime { "ts" : Timestamp(1574796664, 8), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.051-0500 ReplSetTest awaitReplication: checking secondary #0: localhost:20005
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.051-0500 ReplSetTest awaitReplication: secondary #0, localhost:20005, is synced
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:04.406-0500 I NETWORK [conn82] end connection 127.0.0.1:56042 (33 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.052-0500 ReplSetTest awaitReplication: checking secondary #1: localhost:20006
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:04.093-0500 I COMMAND [conn41] Attempting to step down in response to replSetStepDown command
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.052-0500 ReplSetTest awaitReplication: secondary #1, localhost:20006, is synced
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.096-0500 I COMMAND [conn104] CMD fsync: sync:1 lock:1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.052-0500 ReplSetTest awaitReplication: finished: all 2 secondaries synced at optime { "ts" : Timestamp(1574796664, 8), "t" : NumberLong(1) }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:04.092-0500 I COMMAND [conn44] Attempting to step down in response to replSetStepDown command
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.052-0500 checkDBHashesForReplSet checking data hashes against primary: localhost:20004
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:04.088-0500 I REPL [conn43] 'freezing' for 86400 seconds
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.052-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20005
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:04.089-0500 I REPL [conn43] 'freezing' for 86400 seconds
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.053-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20006
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.154-0500 W COMMAND [fsyncLockWorker] WARNING: instance is locked, blocking all writes. The fsync command has finished execution, remember to unlock the instance using fsyncUnlock().
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.053-0500 Finished data consistency checks for cluster in 406 ms.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:04.402-0500 I NETWORK [conn50] end connection 127.0.0.1:44660 (1 connection now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:04.414-0500 I NETWORK [conn81] end connection 127.0.0.1:56040 (32 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:04.094-0500 I REPL [conn41] 'freezing' for 86400 seconds
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:06.053-0500 JSTest jstests/hooks/run_check_repl_dbhash.js finished.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.251-0500 W COMMAND [fsyncLockWorker] WARNING: instance is locked, blocking all writes. The fsync command has finished execution, remember to unlock the instance using fsyncUnlock().
[executor:fsm_workload_test:job0] 2019-11-26T14:31:06.054-0500 agg_out:CheckReplDBHash ran in 2.14 seconds: no failures detected.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:04.093-0500 I REPL [conn44] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:04.284-0500 I REPL [conn43] 'unfreezing'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:04.285-0500 I REPL [conn43] 'unfreezing'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.154-0500 I COMMAND [conn85] mongod is locked and no writes are allowed. db.fsyncUnlock() to unlock
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:04.406-0500 I NETWORK [conn48] end connection 127.0.0.1:44636 (0 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:04.401-0500 I REPL [conn41] 'unfreezing'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.251-0500 I COMMAND [conn104] mongod is locked and no writes are allowed. db.fsyncUnlock() to unlock
[executor:fsm_workload_test:job0] 2019-11-26T14:31:06.055-0500 Running agg_out:ValidateCollections...
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:04.400-0500 I REPL [conn44] 'unfreezing'
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.056-0500 Starting JSTest jstests/hooks/run_validate_collections.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_validate_collections"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_validate_collections.js
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:04.286-0500 I NETWORK [conn43] end connection 127.0.0.1:51844 (10 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:04.286-0500 I NETWORK [conn43] end connection 127.0.0.1:52734 (10 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.155-0500 I COMMAND [conn85] Lock count is 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:05.536-0500 I SHARDING [Uptime-reporter] ShouldAutoSplit changing from 1 to 0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:04.402-0500 I NETWORK [conn41] end connection 127.0.0.1:34842 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.251-0500 I COMMAND [conn104] Lock count is 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:04.402-0500 I NETWORK [conn44] end connection 127.0.0.1:51480 (9 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:04.414-0500 I NETWORK [conn42] end connection 127.0.0.1:51818 (9 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:04.414-0500 I NETWORK [conn42] end connection 127.0.0.1:52710 (9 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.155-0500 I COMMAND [conn85] For more info see http://dochub.mongodb.org/core/fsynccommand
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:04.414-0500 I NETWORK [conn40] end connection 127.0.0.1:34816 (8 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.251-0500 I COMMAND [conn104] For more info see http://dochub.mongodb.org/core/fsynccommand
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:04.414-0500 I NETWORK [conn43] end connection 127.0.0.1:51460 (8 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.282-0500 I COMMAND [conn85] command: unlock requested
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.251-0500 I COMMAND [conn104] command admin.$cmd appName: "MongoDB Shell" command: fsync { fsync: 1.0, lock: 1.0, allowFsyncFailure: true, lsid: { id: UUID("e976b3d0-a6c9-4c73-bbbe-d78a08b35516") }, $clusterTime: { clusterTime: Timestamp(1574796664, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:477 locks:{ Mutex: { acquireCount: { W: 1 } } } protocol:op_msg 154ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.284-0500 I COMMAND [conn85] fsyncUnlock completed. mongod is now unlocked and free to accept writes
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.397-0500 I COMMAND [conn104] command: unlock requested
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.286-0500 I NETWORK [conn84] end connection 127.0.0.1:38804 (28 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.400-0500 I COMMAND [conn104] fsyncUnlock completed. mongod is now unlocked and free to accept writes
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.286-0500 I NETWORK [conn85] end connection 127.0.0.1:38808 (27 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.402-0500 I NETWORK [conn103] end connection 127.0.0.1:46272 (36 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.406-0500 I NETWORK [conn83] end connection 127.0.0.1:38790 (26 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.402-0500 I NETWORK [conn104] end connection 127.0.0.1:46280 (35 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:04.414-0500 I NETWORK [conn82] end connection 127.0.0.1:38786 (25 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.406-0500 I NETWORK [conn102] end connection 127.0.0.1:46264 (34 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.414-0500 I NETWORK [conn101] end connection 127.0.0.1:46260 (33 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.609-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796659, 8)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.609-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-34--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1030)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.612-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-35--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1030)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.613-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-33--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1030)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.614-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-42--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1540)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.615-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-47--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1540)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.615-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-37--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1540)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.617-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-43--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 2301)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.618-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-49--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 2301)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.619-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-38--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 2301)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.620-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-44--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3613)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.621-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-51--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3613)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.622-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-39--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3613)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.623-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-46--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3614)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.624-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-55--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3614)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.625-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-41--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3614)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.626-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-45--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3615)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.628-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-53--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3615)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.629-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-40--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3615)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.630-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-58--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4055)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.631-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-59--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4055)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.632-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-56--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4055)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.633-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-62--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4561)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.634-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-63--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4561)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.635-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-61--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4561)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.637-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-66--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 5072)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.637-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-67--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 5072)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:04.638-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-65--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 5072)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.063-0500 JSTest jstests/hooks/run_validate_collections.js started with pid 15200.
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.086-0500 MongoDB shell version v0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.137-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.137-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44678 #51 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.137-0500 I NETWORK [conn51] received client metadata from 127.0.0.1:44678 conn51: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.139-0500 Implicit session: session { "id" : UUID("d1abc88d-809a-466d-8038-5589d9f98097") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.141-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.142-0500 true
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.146-0500 2019-11-26T14:31:06.146-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.146-0500 2019-11-26T14:31:06.146-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.146-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56082 #83 (33 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.146-0500 I NETWORK [conn83] received client metadata from 127.0.0.1:56082 conn83: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.147-0500 2019-11-26T14:31:06.147-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.147-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56084 #84 (34 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.147-0500 I NETWORK [conn84] received client metadata from 127.0.0.1:56084 conn84: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.148-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.148-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.148-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.148-0500 [jsTest] New session started with sessionID: { "id" : UUID("bbf24c5d-05d4-4605-933e-110254b4df60") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.148-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.148-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.148-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.150-0500 2019-11-26T14:31:06.150-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.150-0500 2019-11-26T14:31:06.150-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.150-0500 2019-11-26T14:31:06.150-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.150-0500 2019-11-26T14:31:06.150-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.150-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51860 #44 (10 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.150-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52750 #44 (10 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.151-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38830 #86 (26 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.151-0500 2019-11-26T14:31:06.151-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.151-0500 I NETWORK [conn44] received client metadata from 127.0.0.1:51860 conn44: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.151-0500 I NETWORK [conn44] received client metadata from 127.0.0.1:52750 conn44: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.151-0500 I NETWORK [conn86] received client metadata from 127.0.0.1:38830 conn86: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.151-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38832 #87 (27 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.151-0500 I NETWORK [conn87] received client metadata from 127.0.0.1:38832 conn87: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.152-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.152-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.152-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.152-0500 [jsTest] New session started with sessionID: { "id" : UUID("e756a4f7-90a3-4a70-92c3-3542ca6cee82") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.152-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.152-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.152-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.152-0500 2019-11-26T14:31:06.152-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.153-0500 2019-11-26T14:31:06.153-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.153-0500 2019-11-26T14:31:06.153-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.153-0500 2019-11-26T14:31:06.153-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.153-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51498 #45 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.153-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46302 #105 (34 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.153-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34862 #42 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.153-0500 I NETWORK [conn45] received client metadata from 127.0.0.1:51498 conn45: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.153-0500 I NETWORK [conn105] received client metadata from 127.0.0.1:46302 conn105: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.153-0500 I NETWORK [conn42] received client metadata from 127.0.0.1:34862 conn42: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.154-0500 2019-11-26T14:31:06.153-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.154-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46306 #106 (35 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.154-0500 I NETWORK [conn106] received client metadata from 127.0.0.1:46306 conn106: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.154-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.154-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.155-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.155-0500 [jsTest] New session started with sessionID: { "id" : UUID("9daa96db-3727-4e06-9079-38f5c0eccc52") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.155-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.155-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.155-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.224-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.224-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.224-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44700 #52 (2 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.224-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44701 #53 (3 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.224-0500 I NETWORK [conn52] received client metadata from 127.0.0.1:44700 conn52: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.224-0500 I NETWORK [conn53] received client metadata from 127.0.0.1:44701 conn53: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.225-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.225-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44704 #54 (4 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.226-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.226-0500 I NETWORK [conn54] received client metadata from 127.0.0.1:44704 conn54: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.226-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44706 #55 (5 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.226-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.226-0500 I NETWORK [conn55] received client metadata from 127.0.0.1:44706 conn55: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.226-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44708 #56 (6 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.226-0500 Implicit session: session { "id" : UUID("478a83d7-f270-490a-9e44-44d12b023e74") }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.226-0500 I NETWORK [conn56] received client metadata from 127.0.0.1:44708 conn56: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.228-0500 Implicit session: session { "id" : UUID("d9fa2c08-5ce7-47ea-93e0-3271ce95fc20") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.228-0500 Implicit session: session { "id" : UUID("f1e5eea3-c269-4688-8b14-705d08874376") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.228-0500 Implicit session: session { "id" : UUID("406269d2-c799-41a5-b43e-0265e92b1e64") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.228-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.228-0500 Implicit session: session { "id" : UUID("881f94ee-1a51-475b-a2b5-b4cf18155ee9") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.229-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.229-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.229-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.230-0500 Running validate() on localhost:20000
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.230-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.230-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56112 #85 (35 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.230-0500 I NETWORK [conn85] received client metadata from 127.0.0.1:56112 conn85: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.231-0500 Running validate() on localhost:20003
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.231-0500 Running validate() on localhost:20002
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.231-0500 Running validate() on localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.231-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52776 #45 (11 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.231-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38858 #88 (28 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.231-0500 I NETWORK [conn88] received client metadata from 127.0.0.1:38858 conn88: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.231-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.231-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51890 #45 (11 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.232-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.232-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.232-0500 [jsTest] New session started with sessionID: { "id" : UUID("6965b1e5-06dc-46c7-bc33-bf842f06bed3") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.232-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.232-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.232-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.232-0500 Running validate() on localhost:20005
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.231-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51524 #46 (10 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.231-0500 I NETWORK [conn45] received client metadata from 127.0.0.1:52776 conn45: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.232-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.232-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.232-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.232-0500 [jsTest] New session started with sessionID: { "id" : UUID("37f95b5a-7aaa-4bf4-ba5e-c0145b2a5275") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500 [jsTest] New session started with sessionID: { "id" : UUID("a0039e2a-aa27-41a3-ad0d-f3303e551841") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500 [jsTest] New session started with sessionID: { "id" : UUID("f137ab8a-4b65-4a57-a9f3-3cf9547a4bec") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500 [jsTest] New session started with sessionID: { "id" : UUID("9a0346f5-9e1c-4143-8444-278affbd9a25") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.232-0500 I NETWORK [conn45] received client metadata from 127.0.0.1:51890 conn45: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.233-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.234-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.234-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.232-0500 I NETWORK [conn46] received client metadata from 127.0.0.1:51524 conn46: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.236-0500 I COMMAND [conn85] CMD: validate admin.system.keys, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.236-0500 W STORAGE [conn85] Could not complete validation of table:collection-41-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.236-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection admin.system.keys
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.236-0500 W STORAGE [conn85] Could not complete validation of table:index-42-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.236-0500 I INDEX [conn85] validating collection admin.system.keys (UUID: 807238e6-a72f-4ef0-b305-4bab60afd0e6)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.236-0500 I INDEX [conn85] validating index consistency _id_ on collection admin.system.keys
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.236-0500 I INDEX [conn85] Validation complete for collection admin.system.keys (UUID: 807238e6-a72f-4ef0-b305-4bab60afd0e6). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.236-0500 I COMMAND [conn45] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.237-0500 W STORAGE [conn45] Could not complete validation of table:collection-17--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.237-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.237-0500 I COMMAND [conn46] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.237-0500 W STORAGE [conn46] Could not complete validation of table:collection-17--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.237-0500 W STORAGE [conn45] Could not complete validation of table:index-18--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.237-0500 I INDEX [conn46] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.237-0500 W STORAGE [conn46] Could not complete validation of table:index-18--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.237-0500 I COMMAND [conn88] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.237-0500 I INDEX [conn45] validating collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.237-0500 I COMMAND [conn85] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.237-0500 I INDEX [conn46] validating collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.237-0500 I COMMAND [conn45] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.237-0500 I INDEX [conn45] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.237-0500 I INDEX [conn46] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.237-0500 I INDEX [conn45] Validation complete for collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.237-0500 W STORAGE [conn45] Could not complete validation of table:collection-17--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.237-0500 I INDEX [conn46] Validation complete for collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.237-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.237-0500 W STORAGE [conn45] Could not complete validation of table:index-18--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.238-0500 I INDEX [conn45] validating collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.238-0500 I INDEX [conn45] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.238-0500 I INDEX [conn45] Validation complete for collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.238-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.238-0500 I INDEX [conn88] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.239-0500 I COMMAND [conn46] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.239-0500 W STORAGE [conn46] Could not complete validation of table:collection-29--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.239-0500 I INDEX [conn46] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.239-0500 W STORAGE [conn46] Could not complete validation of table:index-30--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.239-0500 I INDEX [conn46] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.239-0500 W STORAGE [conn46] Could not complete validation of table:index-31--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.239-0500 I INDEX [conn46] validating collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.239-0500 I INDEX [conn46] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.239-0500 I INDEX [conn46] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.239-0500 I INDEX [conn46] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.239-0500 I COMMAND [conn45] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.239-0500 W STORAGE [conn45] Could not complete validation of table:collection-31--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.239-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.239-0500 W STORAGE [conn45] Could not complete validation of table:index-32--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.239-0500 I INDEX [conn45] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.239-0500 W STORAGE [conn45] Could not complete validation of table:index-35--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.239-0500 I INDEX [conn45] validating collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.239-0500 I INDEX [conn45] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.239-0500 I INDEX [conn45] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.239-0500 I INDEX [conn45] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.240-0500 I COMMAND [conn45] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.240-0500 I COMMAND [conn46] CMD: validate config.cache.chunks.test0_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.240-0500 W STORAGE [conn45] Could not complete validation of table:collection-31--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.240-0500 I COMMAND [conn45] CMD: validate config.cache.chunks.test0_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.240-0500 W STORAGE [conn46] Could not complete validation of table:collection-99--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.240-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.240-0500 I INDEX [conn85] validating collection admin.system.version (UUID: 1b1834a4-71ee-49e7-abbc-7ae09d5089b2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.240-0500 I INDEX [conn46] validating the internal structure of index _id_ on collection config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.240-0500 W STORAGE [conn45] Could not complete validation of table:collection-41--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.240-0500 I INDEX [conn85] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.240-0500 W STORAGE [conn45] Could not complete validation of table:index-32--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.240-0500 W STORAGE [conn46] Could not complete validation of table:index-100--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.240-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.240-0500 I INDEX [conn85] Validation complete for collection admin.system.version (UUID: 1b1834a4-71ee-49e7-abbc-7ae09d5089b2). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.240-0500 I INDEX [conn45] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.240-0500 I INDEX [conn46] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.240-0500 W STORAGE [conn45] Could not complete validation of table:index-42--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.240-0500 W STORAGE [conn45] Could not complete validation of table:index-35--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.240-0500 W STORAGE [conn46] Could not complete validation of table:index-101--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.240-0500 I INDEX [conn45] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.240-0500 I INDEX [conn45] validating collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.240-0500 I INDEX [conn46] validating collection config.cache.chunks.test0_fsmdb0.agg_out (UUID: b53e5b23-cfff-452a-9863-a2ca857d4f54)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.241-0500 I INDEX [conn88] validating collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.240-0500 W STORAGE [conn45] Could not complete validation of table:index-43--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.240-0500 I INDEX [conn45] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.240-0500 I INDEX [conn46] validating index consistency _id_ on collection config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.241-0500 I INDEX [conn88] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.240-0500 I INDEX [conn45] validating collection config.cache.chunks.test0_fsmdb0.fsmcoll0 (UUID: 44049d48-fa0f-4a8e-b7c3-56550b94d236)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.240-0500 I INDEX [conn46] validating index consistency lastmod_1 on collection config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.240-0500 I INDEX [conn45] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.241-0500 I INDEX [conn88] Validation complete for collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.240-0500 I INDEX [conn45] validating index consistency _id_ on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.240-0500 I INDEX [conn46] Validation complete for collection config.cache.chunks.test0_fsmdb0.agg_out (UUID: b53e5b23-cfff-452a-9863-a2ca857d4f54). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.240-0500 I INDEX [conn45] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.240-0500 I INDEX [conn45] validating index consistency lastmod_1 on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.240-0500 I COMMAND [conn46] CMD: validate config.cache.chunks.test0_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.240-0500 W STORAGE [conn46] Could not complete validation of table:collection-37--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.240-0500 I INDEX [conn45] Validation complete for collection config.cache.chunks.test0_fsmdb0.fsmcoll0 (UUID: 44049d48-fa0f-4a8e-b7c3-56550b94d236). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.241-0500 I COMMAND [conn45] CMD: validate config.cache.chunks.test0_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.240-0500 I INDEX [conn46] validating the internal structure of index _id_ on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.242-0500 I COMMAND [conn85] CMD: validate config.actionlog, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.241-0500 I COMMAND [conn45] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.241-0500 W STORAGE [conn45] Could not complete validation of table:collection-41--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.241-0500 W STORAGE [conn46] Could not complete validation of table:index-38--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.242-0500 W STORAGE [conn85] Could not complete validation of table:collection-47-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.241-0500 W STORAGE [conn45] Could not complete validation of table:collection-29--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.241-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.241-0500 I INDEX [conn46] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.242-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection config.actionlog
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.241-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.243-0500 I COMMAND [conn88] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.241-0500 W STORAGE [conn45] Could not complete validation of table:index-42--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.241-0500 W STORAGE [conn46] Could not complete validation of table:index-39--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.242-0500 W STORAGE [conn85] Could not complete validation of table:index-48-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.241-0500 W STORAGE [conn45] Could not complete validation of table:index-30--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.241-0500 I INDEX [conn45] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.241-0500 I INDEX [conn46] validating collection config.cache.chunks.test0_fsmdb0.fsmcoll0 (UUID: dad6441c-7462-448b-9e35-8123157c4429)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.242-0500 I INDEX [conn85] validating collection config.actionlog (UUID: ff427093-1de4-4a9f-83c9-6b01392e1aea)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.241-0500 I INDEX [conn45] validating collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.241-0500 W STORAGE [conn45] Could not complete validation of table:index-43--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.241-0500 I INDEX [conn46] validating index consistency _id_ on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.242-0500 I INDEX [conn85] validating index consistency _id_ on collection config.actionlog
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.241-0500 I INDEX [conn45] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.241-0500 I INDEX [conn45] validating collection config.cache.chunks.test0_fsmdb0.fsmcoll0 (UUID: 44049d48-fa0f-4a8e-b7c3-56550b94d236)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.241-0500 I INDEX [conn46] validating index consistency lastmod_1 on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.242-0500 I INDEX [conn85] Validation complete for collection config.actionlog (UUID: ff427093-1de4-4a9f-83c9-6b01392e1aea). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.244-0500 I INDEX [conn88] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.241-0500 I INDEX [conn45] Validation complete for collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.241-0500 I INDEX [conn45] validating index consistency _id_ on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.241-0500 I INDEX [conn46] Validation complete for collection config.cache.chunks.test0_fsmdb0.fsmcoll0 (UUID: dad6441c-7462-448b-9e35-8123157c4429). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.243-0500 I COMMAND [conn85] CMD: validate config.changelog, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.243-0500 W STORAGE [conn85] Could not complete validation of table:collection-49-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.241-0500 I COMMAND [conn45] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.241-0500 I INDEX [conn45] validating index consistency lastmod_1 on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.243-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection config.changelog
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.241-0500 I COMMAND [conn46] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.241-0500 W STORAGE [conn45] Could not complete validation of table:collection-27--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.241-0500 I INDEX [conn45] Validation complete for collection config.cache.chunks.test0_fsmdb0.fsmcoll0 (UUID: 44049d48-fa0f-4a8e-b7c3-56550b94d236). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.243-0500 W STORAGE [conn85] Could not complete validation of table:index-50-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.241-0500 W STORAGE [conn46] Could not complete validation of table:collection-27--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.241-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.242-0500 I COMMAND [conn45] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.243-0500 I INDEX [conn85] validating collection config.changelog (UUID: 65b892c8-48e9-4ca9-8300-743a486a361f)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.241-0500 I INDEX [conn46] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.242-0500 W STORAGE [conn45] Could not complete validation of table:index-28--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.242-0500 W STORAGE [conn45] Could not complete validation of table:collection-29--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.243-0500 I INDEX [conn85] validating index consistency _id_ on collection config.changelog
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.241-0500 W STORAGE [conn46] Could not complete validation of table:index-28--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.242-0500 I INDEX [conn45] validating collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.242-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.243-0500 I INDEX [conn85] Validation complete for collection config.changelog (UUID: 65b892c8-48e9-4ca9-8300-743a486a361f). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.241-0500 I INDEX [conn46] validating collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.242-0500 I INDEX [conn45] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.242-0500 W STORAGE [conn45] Could not complete validation of table:index-30--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.244-0500 I COMMAND [conn85] CMD: validate config.chunks, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.241-0500 I INDEX [conn46] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.246-0500 I INDEX [conn88] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.242-0500 I INDEX [conn45] Validation complete for collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.242-0500 I INDEX [conn45] validating collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.244-0500 W STORAGE [conn85] Could not complete validation of table:collection-17-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.241-0500 I INDEX [conn46] Validation complete for collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.242-0500 I COMMAND [conn45] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.242-0500 I INDEX [conn45] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.244-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection config.chunks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.242-0500 I COMMAND [conn46] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.242-0500 W STORAGE [conn45] Could not complete validation of table:collection-25--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.242-0500 I INDEX [conn45] Validation complete for collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.244-0500 W STORAGE [conn85] Could not complete validation of table:index-18-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.242-0500 W STORAGE [conn46] Could not complete validation of table:collection-25--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.242-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.243-0500 I COMMAND [conn45] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.244-0500 I INDEX [conn85] validating the internal structure of index ns_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.242-0500 I INDEX [conn46] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.242-0500 W STORAGE [conn45] Could not complete validation of table:index-26--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.243-0500 W STORAGE [conn45] Could not complete validation of table:collection-27--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.244-0500 W STORAGE [conn85] Could not complete validation of table:index-19-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.242-0500 W STORAGE [conn46] Could not complete validation of table:index-26--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.242-0500 I INDEX [conn45] validating the internal structure of index lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.243-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.244-0500 I INDEX [conn85] validating the internal structure of index ns_1_shard_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.242-0500 I INDEX [conn46] validating collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.242-0500 W STORAGE [conn45] Could not complete validation of table:index-33--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.243-0500 W STORAGE [conn45] Could not complete validation of table:index-28--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.244-0500 W STORAGE [conn85] Could not complete validation of table:index-20-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.242-0500 I INDEX [conn46] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.242-0500 I INDEX [conn45] validating collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.243-0500 I INDEX [conn45] validating collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.244-0500 I INDEX [conn85] validating the internal structure of index ns_1_lastmod_1 on collection config.chunks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.242-0500 I INDEX [conn46] Validation complete for collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.243-0500 I INDEX [conn45] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.243-0500 I INDEX [conn45] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.244-0500 W STORAGE [conn85] Could not complete validation of table:index-21-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.243-0500 I COMMAND [conn46] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.243-0500 I INDEX [conn45] validating index consistency lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.243-0500 I INDEX [conn45] Validation complete for collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.243-0500 I INDEX [conn46] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.244-0500 I INDEX [conn85] validating collection config.chunks (UUID: e7035d0b-a892-4426-b520-83da62bcbda6)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.243-0500 I INDEX [conn45] Validation complete for collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.244-0500 I COMMAND [conn45] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.243-0500 W STORAGE [conn46] Could not complete validation of table:index-22--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.244-0500 I INDEX [conn85] validating index consistency _id_ on collection config.chunks
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.243-0500 I COMMAND [conn45] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.244-0500 W STORAGE [conn45] Could not complete validation of table:collection-25--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.244-0500 I INDEX [conn46] validating collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.244-0500 I INDEX [conn85] validating index consistency ns_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.248-0500 I INDEX [conn88] validating collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.244-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.244-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.244-0500 I INDEX [conn46] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.244-0500 I INDEX [conn46] Validation complete for collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.249-0500 I INDEX [conn88] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.244-0500 W STORAGE [conn45] Could not complete validation of table:index-22--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.244-0500 W STORAGE [conn45] Could not complete validation of table:index-26--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.244-0500 I INDEX [conn85] validating index consistency ns_1_shard_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.246-0500 I COMMAND [conn46] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.249-0500 I INDEX [conn88] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.244-0500 I INDEX [conn45] validating the internal structure of index lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.245-0500 I INDEX [conn45] validating collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.244-0500 I INDEX [conn85] validating index consistency ns_1_lastmod_1 on collection config.chunks
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.249-0500 I INDEX [conn88] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.244-0500 W STORAGE [conn45] Could not complete validation of table:index-33--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.245-0500 I INDEX [conn45] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.244-0500 I INDEX [conn85] Validation complete for collection config.chunks (UUID: e7035d0b-a892-4426-b520-83da62bcbda6). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.249-0500 I COMMAND [conn88] CMD: validate config.cache.chunks.test0_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.244-0500 I INDEX [conn45] validating collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.245-0500 I INDEX [conn45] Validation complete for collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.245-0500 I COMMAND [conn85] CMD: validate config.collections, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.244-0500 I INDEX [conn45] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.247-0500 I COMMAND [conn45] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.245-0500 W STORAGE [conn85] Could not complete validation of table:collection-51-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.244-0500 I INDEX [conn45] validating index consistency lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.247-0500 W STORAGE [conn45] Could not complete validation of table:collection-16--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.245-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection config.collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.244-0500 I INDEX [conn45] Validation complete for collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.247-0500 I INDEX [conn45] validating collection local.oplog.rs (UUID: 6d43bede-f05f-41b1-b7ac-5a32b66b8140)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.245-0500 W STORAGE [conn85] Could not complete validation of table:index-52-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.247-0500 I INDEX [conn45] Validation complete for collection local.oplog.rs (UUID: 6d43bede-f05f-41b1-b7ac-5a32b66b8140). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.245-0500 I COMMAND [conn45] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.245-0500 I INDEX [conn85] validating collection config.collections (UUID: c846d630-16e0-4675-b90f-3cd769544ef0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.251-0500 I INDEX [conn88] validating the internal structure of index _id_ on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.248-0500 I COMMAND [conn45] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.246-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.245-0500 I INDEX [conn85] validating index consistency _id_ on collection config.collections
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.249-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.246-0500 W STORAGE [conn45] Could not complete validation of table:index-22--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.245-0500 I INDEX [conn85] Validation complete for collection config.collections (UUID: c846d630-16e0-4675-b90f-3cd769544ef0). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.247-0500 I INDEX [conn45] validating collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.251-0500 I INDEX [conn45] validating collection local.replset.election (UUID: bf7b5380-e70a-475e-ad1b-16751bee6907)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.246-0500 I COMMAND [conn85] CMD: validate config.databases, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.247-0500 I INDEX [conn45] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.246-0500 W STORAGE [conn85] Could not complete validation of table:collection-55-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.251-0500 I INDEX [conn45] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.247-0500 I INDEX [conn45] Validation complete for collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.246-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection config.databases
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.251-0500 W STORAGE [conn46] Could not complete validation of table:collection-16--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.249-0500 I COMMAND [conn45] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.251-0500 I INDEX [conn45] Validation complete for collection local.replset.election (UUID: bf7b5380-e70a-475e-ad1b-16751bee6907). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.246-0500 W STORAGE [conn85] Could not complete validation of table:index-56-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.250-0500 W STORAGE [conn45] Could not complete validation of table:collection-16--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.251-0500 I INDEX [conn46] validating collection local.oplog.rs (UUID: 6c707c3f-4064-4e35-98fb-b2fff8245539)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.246-0500 I INDEX [conn85] validating collection config.databases (UUID: 1c31f9a7-ee46-41d3-a296-2e1f323b51b8)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.250-0500 I INDEX [conn45] validating collection local.oplog.rs (UUID: 88962763-38f7-4965-bfd6-b2a62304ae0e)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.246-0500 I INDEX [conn85] validating index consistency _id_ on collection config.databases
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.252-0500 I COMMAND [conn45] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.246-0500 I INDEX [conn85] Validation complete for collection config.databases (UUID: 1c31f9a7-ee46-41d3-a296-2e1f323b51b8). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.251-0500 I INDEX [conn45] Validation complete for collection local.oplog.rs (UUID: 88962763-38f7-4965-bfd6-b2a62304ae0e). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.252-0500 W STORAGE [conn45] Could not complete validation of table:collection-4--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.247-0500 I COMMAND [conn85] CMD: validate config.lockpings, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.252-0500 I COMMAND [conn45] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.252-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.247-0500 W STORAGE [conn85] Could not complete validation of table:collection-32-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.252-0500 I INDEX [conn88] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.253-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.247-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection config.lockpings
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.247-0500 W STORAGE [conn85] Could not complete validation of table:index-33-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.247-0500 I INDEX [conn85] validating the internal structure of index ping_1 on collection config.lockpings
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.247-0500 W STORAGE [conn85] Could not complete validation of table:index-34-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.247-0500 I INDEX [conn85] validating collection config.lockpings (UUID: f662f115-623a-496b-9953-7132cdf8c056)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.247-0500 I INDEX [conn85] validating index consistency _id_ on collection config.lockpings
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.247-0500 I INDEX [conn85] validating index consistency ping_1 on collection config.lockpings
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.247-0500 I INDEX [conn85] Validation complete for collection config.lockpings (UUID: f662f115-623a-496b-9953-7132cdf8c056). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.248-0500 I COMMAND [conn85] CMD: validate config.locks, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.248-0500 W STORAGE [conn85] Could not complete validation of table:collection-28-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.248-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection config.locks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.248-0500 W STORAGE [conn85] Could not complete validation of table:index-29-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.248-0500 I INDEX [conn85] validating the internal structure of index ts_1 on collection config.locks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.248-0500 W STORAGE [conn85] Could not complete validation of table:index-30-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.248-0500 I INDEX [conn85] validating the internal structure of index state_1_process_1 on collection config.locks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.248-0500 W STORAGE [conn85] Could not complete validation of table:index-31-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.248-0500 I INDEX [conn85] validating collection config.locks (UUID: dbde06c7-d8ac-4f80-ab9f-cae486f16451)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.248-0500 I INDEX [conn85] validating index consistency _id_ on collection config.locks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.248-0500 I INDEX [conn85] validating index consistency ts_1 on collection config.locks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.248-0500 I INDEX [conn85] validating index consistency state_1_process_1 on collection config.locks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.248-0500 I INDEX [conn85] Validation complete for collection config.locks (UUID: dbde06c7-d8ac-4f80-ab9f-cae486f16451). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.249-0500 I COMMAND [conn85] CMD: validate config.migrations, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.250-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection config.migrations
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.251-0500 I INDEX [conn85] validating the internal structure of index ns_1_min_1 on collection config.migrations
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.253-0500 I INDEX [conn85] validating collection config.migrations (UUID: 550e32ef-0dd4-48f9-bb5e-9e21bec0734f)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.253-0500 I INDEX [conn85] validating index consistency _id_ on collection config.migrations
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.253-0500 I INDEX [conn85] validating index consistency ns_1_min_1 on collection config.migrations
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.253-0500 I INDEX [conn85] Validation complete for collection config.migrations (UUID: 550e32ef-0dd4-48f9-bb5e-9e21bec0734f). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.254-0500 I COMMAND [conn85] CMD: validate config.mongos, full:true
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:06.259-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.254-0500 I INDEX [conn45] validating collection local.replset.minvalid (UUID: 6654b1c2-f323-4c78-9165-5ff31d331960)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.605-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.256-0500 I INDEX [conn88] validating collection config.cache.chunks.test0_fsmdb0.fsmcoll0 (UUID: 44049d48-fa0f-4a8e-b7c3-56550b94d236)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.256-0500 I INDEX [conn45] validating collection local.replset.election (UUID: d0928956-d7fc-46fe-a9bc-1f07f2435457)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500 Implicit session: session { "id" : UUID("2ca86599-a32d-49ee-ab4e-7619d8c6b2c1") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500 Implicit session: session { "id" : UUID("c323a989-fa5a-41d3-b73b-eb163333a63c") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500 Running validate() on localhost:20004
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500 Running validate() on localhost:20006
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500 [jsTest] New session started with sessionID: { "id" : UUID("5c52a6eb-e181-48ea-a318-92e5d8b14928") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.606-0500 [jsTest] New session started with sessionID: { "id" : UUID("4839001d-617c-47ed-8816-d117f409a632") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.607-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.607-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.607-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.257-0500 I INDEX [conn46] Validation complete for collection local.oplog.rs (UUID: 6c707c3f-4064-4e35-98fb-b2fff8245539). No corruption found.
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:07.607-0500 JSTest jstests/hooks/run_validate_collections.js finished.
[executor:fsm_workload_test:job0] 2019-11-26T14:31:07.608-0500 agg_out:ValidateCollections ran in 1.55 seconds: no failures detected.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.259-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44721 #57 (7 connections now open)
[executor:fsm_workload_test:job0] 2019-11-26T14:31:07.608-0500 Running agg_out:CleanupConcurrencyWorkloads...
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.267-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46332 #107 (36 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.267-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34890 #43 (10 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:07.140-0500 I SHARDING [Uptime-reporter] ShouldAutoSplit changing from 1 to 0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.254-0500 W STORAGE [conn85] Could not complete validation of table:collection-43-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.254-0500 I INDEX [conn45] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.256-0500 I INDEX [conn88] validating index consistency _id_ on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.256-0500 I INDEX [conn45] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.257-0500 I COMMAND [conn46] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.259-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44720 #58 (8 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.268-0500 I NETWORK [conn107] received client metadata from 127.0.0.1:46332 conn107: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:07.610-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57870 #26 (1 connection now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.267-0500 I NETWORK [conn43] received client metadata from 127.0.0.1:34890 conn43: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.254-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection config.mongos
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.254-0500 I INDEX [conn45] Validation complete for collection local.replset.minvalid (UUID: 6654b1c2-f323-4c78-9165-5ff31d331960). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.256-0500 I INDEX [conn88] validating index consistency lastmod_1 on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.256-0500 I INDEX [conn45] Validation complete for collection local.replset.election (UUID: d0928956-d7fc-46fe-a9bc-1f07f2435457). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.258-0500 I INDEX [conn46] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.259-0500 I NETWORK [conn57] received client metadata from 127.0.0.1:44721 conn57: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.275-0500 I COMMAND [conn107] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:07.611-0500 I NETWORK [conn26] received client metadata from 127.0.0.1:57870 conn26: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.274-0500 I COMMAND [conn43] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.254-0500 W STORAGE [conn85] Could not complete validation of table:index-44-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.254-0500 I COMMAND [conn45] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.256-0500 I INDEX [conn88] Validation complete for collection config.cache.chunks.test0_fsmdb0.fsmcoll0 (UUID: 44049d48-fa0f-4a8e-b7c3-56550b94d236). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.257-0500 I COMMAND [conn88] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.257-0500 I INDEX [conn88] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.260-0500 I NETWORK [conn58] received client metadata from 127.0.0.1:44720 conn58: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.276-0500 I INDEX [conn107] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:07.612-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57874 #27 (2 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.274-0500 W STORAGE [conn43] Could not complete validation of table:collection-17--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.254-0500 I INDEX [conn85] validating collection config.mongos (UUID: 57207abe-6d8d-4102-a526-bc847dba6c09)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.259-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.257-0500 I COMMAND [conn45] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.260-0500 I INDEX [conn46] validating collection local.replset.election (UUID: 6a83721b-d0f2-438c-a2e3-ec6a11e75236)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.259-0500 I INDEX [conn88] validating collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.277-0500 I NETWORK [conn55] end connection 127.0.0.1:44706 (7 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.278-0500 I INDEX [conn107] validating collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:07.613-0500 I NETWORK [conn27] received client metadata from 127.0.0.1:57874 conn27: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.274-0500 I INDEX [conn43] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.254-0500 I INDEX [conn85] validating index consistency _id_ on collection config.mongos
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.261-0500 I INDEX [conn45] validating collection local.replset.oplogTruncateAfterPoint (UUID: fe211210-ae1b-4ab2-81d6-86b025cc1404)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.257-0500 W STORAGE [conn45] Could not complete validation of table:collection-4--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.260-0500 I INDEX [conn46] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.260-0500 I INDEX [conn88] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.281-0500 I NETWORK [conn54] end connection 127.0.0.1:44704 (6 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.278-0500 I INDEX [conn107] validating index consistency _id_ on collection admin.system.version
[CleanupConcurrencyWorkloads:job0:agg_out:CleanupConcurrencyWorkloads] 2019-11-26T14:31:07.616-0500 Dropping all databases except for ['config', 'local', '$external', 'admin']
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:07.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[CleanupConcurrencyWorkloads:job0:agg_out:CleanupConcurrencyWorkloads] 2019-11-26T14:31:07.616-0500 Dropping database test0_fsmdb0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.274-0500 W STORAGE [conn43] Could not complete validation of table:index-18--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.254-0500 I INDEX [conn85] Validation complete for collection config.mongos (UUID: 57207abe-6d8d-4102-a526-bc847dba6c09). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.261-0500 I INDEX [conn45] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.257-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.260-0500 I INDEX [conn46] Validation complete for collection local.replset.election (UUID: 6a83721b-d0f2-438c-a2e3-ec6a11e75236). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.260-0500 I INDEX [conn88] Validation complete for collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b). No corruption found.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.288-0500 I NETWORK [conn56] end connection 127.0.0.1:44708 (5 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.278-0500 I INDEX [conn107] Validation complete for collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5). No corruption found.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:07.614-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.274-0500 I INDEX [conn43] validating collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.254-0500 I COMMAND [conn85] CMD: validate config.settings, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.261-0500 I INDEX [conn45] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: fe211210-ae1b-4ab2-81d6-86b025cc1404). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.258-0500 I INDEX [conn45] validating collection local.replset.minvalid (UUID: 6eb6e647-60c7-450a-a905-f04052287b8a)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.261-0500 I COMMAND [conn46] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.260-0500 I COMMAND [conn88] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.298-0500 I NETWORK [conn52] end connection 127.0.0.1:44700 (4 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.280-0500 I COMMAND [conn107] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.274-0500 I INDEX [conn43] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.254-0500 W STORAGE [conn85] Could not complete validation of table:collection-45-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.262-0500 I COMMAND [conn45] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.258-0500 I INDEX [conn45] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.261-0500 W STORAGE [conn46] Could not complete validation of table:collection-4--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.261-0500 I INDEX [conn88] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.308-0500 I NETWORK [conn53] end connection 127.0.0.1:44701 (3 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.282-0500 I INDEX [conn107] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.274-0500 I INDEX [conn43] Validation complete for collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.254-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection config.settings
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.263-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.258-0500 I INDEX [conn45] Validation complete for collection local.replset.minvalid (UUID: 6eb6e647-60c7-450a-a905-f04052287b8a). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.261-0500 I INDEX [conn46] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.263-0500 I INDEX [conn88] validating collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.337-0500 I NETWORK [conn57] end connection 127.0.0.1:44721 (2 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.284-0500 I INDEX [conn107] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.276-0500 I COMMAND [conn43] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.254-0500 W STORAGE [conn85] Could not complete validation of table:index-46-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.264-0500 I INDEX [conn45] validating collection local.startup_log (UUID: 7b6988ea-0c65-41a6-9855-5680c2c711a1)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.259-0500 I COMMAND [conn45] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.262-0500 I INDEX [conn46] validating collection local.replset.minvalid (UUID: 3f481e27-9697-4b6d-b77b-0bd9b43c5dfa)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.263-0500 I INDEX [conn88] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.356-0500 I NETWORK [conn58] end connection 127.0.0.1:44720 (1 connection now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.287-0500 I INDEX [conn107] validating collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.276-0500 W STORAGE [conn43] Could not complete validation of table:collection-29--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.254-0500 I INDEX [conn85] validating collection config.settings (UUID: 6d167d1d-0483-49b9-9ac8-ee5b66996698)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.265-0500 I INDEX [conn45] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.263-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.262-0500 I INDEX [conn46] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.263-0500 I INDEX [conn88] Validation complete for collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f). No corruption found.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:06.361-0500 I NETWORK [conn51] end connection 127.0.0.1:44678 (0 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.287-0500 I INDEX [conn107] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.276-0500 I INDEX [conn43] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.255-0500 I INDEX [conn85] validating index consistency _id_ on collection config.settings
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.265-0500 I INDEX [conn45] Validation complete for collection local.startup_log (UUID: 7b6988ea-0c65-41a6-9855-5680c2c711a1). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.265-0500 I INDEX [conn45] validating collection local.replset.oplogTruncateAfterPoint (UUID: 5d41bfc8-ebca-43f3-a038-30023495a91a)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.262-0500 I INDEX [conn46] Validation complete for collection local.replset.minvalid (UUID: 3f481e27-9697-4b6d-b77b-0bd9b43c5dfa). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.263-0500 I COMMAND [conn88] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:07.611-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44730 #59 (1 connection now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.287-0500 I INDEX [conn107] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.276-0500 W STORAGE [conn43] Could not complete validation of table:index-30--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.276-0500 I INDEX [conn43] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.265-0500 I COMMAND [conn45] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.265-0500 I INDEX [conn45] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.263-0500 I COMMAND [conn46] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.264-0500 I INDEX [conn88] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:07.611-0500 I NETWORK [conn59] received client metadata from 127.0.0.1:44730 conn59: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.287-0500 I INDEX [conn107] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.287-0500 I COMMAND [conn107] CMD: validate config.cache.chunks.test0_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.287-0500 W STORAGE [conn107] Could not complete validation of table:collection-91--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.288-0500 I INDEX [conn107] validating the internal structure of index _id_ on collection config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.288-0500 W STORAGE [conn107] Could not complete validation of table:index-93--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.267-0500 I INDEX [conn46] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.266-0500 I INDEX [conn88] validating the internal structure of index lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.255-0500 I INDEX [conn85] Validation complete for collection config.settings (UUID: 6d167d1d-0483-49b9-9ac8-ee5b66996698). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.276-0500 W STORAGE [conn43] Could not complete validation of table:index-31--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.266-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.265-0500 I INDEX [conn45] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 5d41bfc8-ebca-43f3-a038-30023495a91a). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.288-0500 I INDEX [conn107] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.288-0500 W STORAGE [conn107] Could not complete validation of table:index-95--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.268-0500 I INDEX [conn88] validating collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.255-0500 I COMMAND [conn85] CMD: validate config.shards, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.276-0500 I INDEX [conn43] validating collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.268-0500 I INDEX [conn45] validating collection local.system.replset (UUID: 920cbf66-0930-4ef5-82e9-10d7319f0fda)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.265-0500 I COMMAND [conn45] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.269-0500 I INDEX [conn46] validating collection local.replset.oplogTruncateAfterPoint (UUID: ae67a1b2-b2be-4d7e-8242-18f3082bc280)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.288-0500 I INDEX [conn107] validating collection config.cache.chunks.test0_fsmdb0.agg_out (UUID: b53e5b23-cfff-452a-9863-a2ca857d4f54)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.268-0500 I INDEX [conn88] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.255-0500 W STORAGE [conn85] Could not complete validation of table:collection-25-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.277-0500 I INDEX [conn43] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.268-0500 I INDEX [conn45] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.266-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.269-0500 I INDEX [conn46] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.288-0500 I INDEX [conn107] validating index consistency _id_ on collection config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.268-0500 I INDEX [conn88] validating index consistency lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.255-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection config.shards
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.277-0500 I INDEX [conn43] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.268-0500 I INDEX [conn45] Validation complete for collection local.system.replset (UUID: 920cbf66-0930-4ef5-82e9-10d7319f0fda). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.268-0500 I INDEX [conn45] validating collection local.startup_log (UUID: e0cc0511-0005-4584-a461-5ae30058b4c6)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.269-0500 I INDEX [conn46] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: ae67a1b2-b2be-4d7e-8242-18f3082bc280). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.288-0500 I INDEX [conn107] validating index consistency lastmod_1 on collection config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.268-0500 I INDEX [conn88] Validation complete for collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.255-0500 W STORAGE [conn85] Could not complete validation of table:index-26-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.277-0500 I INDEX [conn43] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.269-0500 I COMMAND [conn45] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.268-0500 I INDEX [conn45] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.270-0500 I COMMAND [conn46] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.288-0500 I INDEX [conn107] Validation complete for collection config.cache.chunks.test0_fsmdb0.agg_out (UUID: b53e5b23-cfff-452a-9863-a2ca857d4f54). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.269-0500 I COMMAND [conn88] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.255-0500 I INDEX [conn85] validating the internal structure of index host_1 on collection config.shards
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.277-0500 I COMMAND [conn43] CMD: validate config.cache.chunks.test0_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.270-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.268-0500 I INDEX [conn45] Validation complete for collection local.startup_log (UUID: e0cc0511-0005-4584-a461-5ae30058b4c6). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.270-0500 I INDEX [conn46] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.288-0500 I COMMAND [conn107] CMD: validate config.cache.chunks.test0_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.270-0500 I INDEX [conn88] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.255-0500 W STORAGE [conn85] Could not complete validation of table:index-27-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.277-0500 W STORAGE [conn43] Could not complete validation of table:collection-99--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.271-0500 I INDEX [conn45] validating collection local.system.rollback.id (UUID: 9434a858-83b3-4d87-8d66-64bde405790b)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.269-0500 I COMMAND [conn45] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.272-0500 I INDEX [conn46] validating collection local.startup_log (UUID: fb2ea5d2-ac7b-4697-a368-9f5d41483423)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.289-0500 I INDEX [conn107] validating the internal structure of index _id_ on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.270-0500 W STORAGE [conn88] Could not complete validation of table:index-16-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.255-0500 I INDEX [conn85] validating collection config.shards (UUID: ed6a2b77-0788-4ad3-a1b0-ccd61535c24f)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.277-0500 I INDEX [conn43] validating the internal structure of index _id_ on collection config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.272-0500 I INDEX [conn45] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.272-0500 I INDEX [conn45] Validation complete for collection local.system.rollback.id (UUID: 9434a858-83b3-4d87-8d66-64bde405790b). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.273-0500 I COMMAND [conn45] CMD: validate test0_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.291-0500 I INDEX [conn107] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.271-0500 I INDEX [conn88] validating collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.255-0500 I INDEX [conn85] validating index consistency _id_ on collection config.shards
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.277-0500 W STORAGE [conn43] Could not complete validation of table:index-100--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.270-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.272-0500 I INDEX [conn46] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.273-0500 W STORAGE [conn45] Could not complete validation of table:collection-37--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.293-0500 I INDEX [conn107] validating collection config.cache.chunks.test0_fsmdb0.fsmcoll0 (UUID: dad6441c-7462-448b-9e35-8123157c4429)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.271-0500 I INDEX [conn88] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.271-0500 I INDEX [conn88] Validation complete for collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.277-0500 I INDEX [conn43] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.277-0500 W STORAGE [conn43] Could not complete validation of table:index-101--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.277-0500 I INDEX [conn43] validating collection config.cache.chunks.test0_fsmdb0.agg_out (UUID: b53e5b23-cfff-452a-9863-a2ca857d4f54)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.273-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.293-0500 I INDEX [conn107] validating index consistency _id_ on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.255-0500 I INDEX [conn85] validating index consistency host_1 on collection config.shards
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.272-0500 I COMMAND [conn88] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.272-0500 I INDEX [conn45] validating collection local.system.replset (UUID: 3b8c02e8-ec29-4e79-912d-3e315d1d851c)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.272-0500 I INDEX [conn46] Validation complete for collection local.startup_log (UUID: fb2ea5d2-ac7b-4697-a368-9f5d41483423). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.277-0500 I INDEX [conn43] validating index consistency _id_ on collection config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.273-0500 W STORAGE [conn45] Could not complete validation of table:index-38--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.293-0500 I INDEX [conn107] validating index consistency lastmod_1 on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.255-0500 I INDEX [conn85] Validation complete for collection config.shards (UUID: ed6a2b77-0788-4ad3-a1b0-ccd61535c24f). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.272-0500 W STORAGE [conn88] Could not complete validation of table:collection-10-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.272-0500 I INDEX [conn88] validating collection local.oplog.rs (UUID: 5f1b9ff7-2fef-4590-8e90-0f3704b0f5df)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.273-0500 I COMMAND [conn46] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.277-0500 I INDEX [conn43] validating index consistency lastmod_1 on collection config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.274-0500 I INDEX [conn45] validating the internal structure of index _id_hashed on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.293-0500 I INDEX [conn107] Validation complete for collection config.cache.chunks.test0_fsmdb0.fsmcoll0 (UUID: dad6441c-7462-448b-9e35-8123157c4429). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.256-0500 I COMMAND [conn85] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.272-0500 I INDEX [conn45] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.273-0500 I INDEX [conn88] Validation complete for collection local.oplog.rs (UUID: 5f1b9ff7-2fef-4590-8e90-0f3704b0f5df). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.274-0500 I INDEX [conn46] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.277-0500 I INDEX [conn43] Validation complete for collection config.cache.chunks.test0_fsmdb0.agg_out (UUID: b53e5b23-cfff-452a-9863-a2ca857d4f54). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.274-0500 W STORAGE [conn45] Could not complete validation of table:index-39--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.294-0500 I COMMAND [conn107] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.256-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.258-0500 I INDEX [conn85] validating collection config.system.sessions (UUID: 9014747b-5aa2-462f-9e13-1e6b27298390)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.274-0500 I COMMAND [conn88] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.276-0500 I INDEX [conn46] validating collection local.system.replset (UUID: 2b695a66-e9c6-4bba-a36e-eb0a5cf356ba)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.278-0500 I COMMAND [conn43] CMD: validate config.cache.chunks.test0_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.274-0500 I INDEX [conn45] validating collection test0_fsmdb0.fsmcoll0 (UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.294-0500 W STORAGE [conn107] Could not complete validation of table:collection-18--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.294-0500 I INDEX [conn107] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.258-0500 I INDEX [conn85] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.274-0500 I INDEX [conn88] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.276-0500 I INDEX [conn46] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.278-0500 W STORAGE [conn43] Could not complete validation of table:collection-37--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.275-0500 I INDEX [conn45] validating index consistency _id_ on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.272-0500 I INDEX [conn45] Validation complete for collection local.system.replset (UUID: 3b8c02e8-ec29-4e79-912d-3e315d1d851c). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.294-0500 W STORAGE [conn107] Could not complete validation of table:index-20--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.258-0500 I INDEX [conn85] Validation complete for collection config.system.sessions (UUID: 9014747b-5aa2-462f-9e13-1e6b27298390). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.277-0500 I INDEX [conn88] validating collection local.replset.election (UUID: 801ad0de-17c3-44b2-a878-e91b8de004c5)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.276-0500 I INDEX [conn46] Validation complete for collection local.system.replset (UUID: 2b695a66-e9c6-4bba-a36e-eb0a5cf356ba). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.278-0500 I INDEX [conn43] validating the internal structure of index _id_ on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.275-0500 I INDEX [conn45] validating index consistency _id_hashed on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.272-0500 I COMMAND [conn45] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.294-0500 I INDEX [conn107] validating collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.259-0500 I COMMAND [conn85] CMD: validate config.tags, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.277-0500 I INDEX [conn88] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.276-0500 I COMMAND [conn46] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.278-0500 W STORAGE [conn43] Could not complete validation of table:index-38--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.276-0500 I INDEX [conn45] Validation complete for collection test0_fsmdb0.fsmcoll0 (UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.273-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.294-0500 I INDEX [conn107] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.259-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection config.tags
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.277-0500 I INDEX [conn88] Validation complete for collection local.replset.election (UUID: 801ad0de-17c3-44b2-a878-e91b8de004c5). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.277-0500 I INDEX [conn46] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.278-0500 I INDEX [conn43] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.277-0500 I NETWORK [conn45] end connection 127.0.0.1:52776 (10 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.275-0500 I INDEX [conn45] validating collection local.system.rollback.id (UUID: 1099f6d7-f170-471c-a0ac-dc97bd7e42b0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.294-0500 I INDEX [conn107] Validation complete for collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.261-0500 I INDEX [conn85] validating the internal structure of index ns_1_min_1 on collection config.tags
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.277-0500 I COMMAND [conn88] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.279-0500 I INDEX [conn46] validating collection local.system.rollback.id (UUID: d6027364-802b-4e8d-ae7f-556bc4252840)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.278-0500 W STORAGE [conn43] Could not complete validation of table:index-39--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:06.373-0500 I NETWORK [conn44] end connection 127.0.0.1:52750 (9 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.275-0500 I INDEX [conn45] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.294-0500 I COMMAND [conn107] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.261-0500 W STORAGE [conn85] Could not complete validation of table:index-37-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.278-0500 I INDEX [conn88] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.279-0500 I INDEX [conn46] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.278-0500 I INDEX [conn43] validating collection config.cache.chunks.test0_fsmdb0.fsmcoll0 (UUID: dad6441c-7462-448b-9e35-8123157c4429)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.275-0500 I INDEX [conn45] Validation complete for collection local.system.rollback.id (UUID: 1099f6d7-f170-471c-a0ac-dc97bd7e42b0). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.295-0500 I INDEX [conn107] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.261-0500 I INDEX [conn85] validating the internal structure of index ns_1_tag_1 on collection config.tags
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.280-0500 I INDEX [conn88] validating collection local.replset.minvalid (UUID: a96fd08c-e1c8-43e5-868a-0849697b175e)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.279-0500 I INDEX [conn46] Validation complete for collection local.system.rollback.id (UUID: d6027364-802b-4e8d-ae7f-556bc4252840). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.278-0500 I INDEX [conn43] validating index consistency _id_ on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.277-0500 I COMMAND [conn45] CMD: validate test0_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.297-0500 I INDEX [conn107] validating collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.261-0500 W STORAGE [conn85] Could not complete validation of table:index-38-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.280-0500 I INDEX [conn88] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.281-0500 I COMMAND [conn46] CMD: validate test0_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.278-0500 I INDEX [conn43] validating index consistency lastmod_1 on collection config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.277-0500 W STORAGE [conn45] Could not complete validation of table:collection-37--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.297-0500 I INDEX [conn107] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.261-0500 I INDEX [conn85] validating collection config.tags (UUID: d225b508-e40e-4c3c-a716-26adc4561055)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.280-0500 I INDEX [conn88] Validation complete for collection local.replset.minvalid (UUID: a96fd08c-e1c8-43e5-868a-0849697b175e). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.281-0500 W STORAGE [conn46] Could not complete validation of table:collection-77--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.278-0500 I INDEX [conn43] Validation complete for collection config.cache.chunks.test0_fsmdb0.fsmcoll0 (UUID: dad6441c-7462-448b-9e35-8123157c4429). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.277-0500 I INDEX [conn45] validating the internal structure of index _id_ on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.297-0500 I INDEX [conn107] Validation complete for collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.262-0500 I INDEX [conn85] validating index consistency _id_ on collection config.tags
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.281-0500 I COMMAND [conn88] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.281-0500 I INDEX [conn46] validating the internal structure of index _id_ on collection test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.279-0500 I COMMAND [conn43] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.277-0500 W STORAGE [conn45] Could not complete validation of table:index-38--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.298-0500 I COMMAND [conn107] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.299-0500 I INDEX [conn107] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.281-0500 I INDEX [conn88] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.281-0500 W STORAGE [conn46] Could not complete validation of table:index-78--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.279-0500 W STORAGE [conn43] Could not complete validation of table:collection-27--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.277-0500 I INDEX [conn45] validating the internal structure of index _id_hashed on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.299-0500 W STORAGE [conn107] Could not complete validation of table:index-16--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.287-0500 I INDEX [conn88] validating collection local.replset.oplogTruncateAfterPoint (UUID: 4ac06258-0ea7-46c8-b773-0c637830872b)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.262-0500 I INDEX [conn85] validating index consistency ns_1_min_1 on collection config.tags
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.281-0500 I INDEX [conn46] validating the internal structure of index _id_hashed on collection test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.279-0500 I INDEX [conn43] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.277-0500 W STORAGE [conn45] Could not complete validation of table:index-39--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.299-0500 I INDEX [conn107] validating collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.287-0500 I INDEX [conn88] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.262-0500 I INDEX [conn85] validating index consistency ns_1_tag_1 on collection config.tags
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.281-0500 W STORAGE [conn46] Could not complete validation of table:index-89--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.279-0500 W STORAGE [conn43] Could not complete validation of table:index-28--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.277-0500 I INDEX [conn45] validating collection test0_fsmdb0.fsmcoll0 (UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.299-0500 I INDEX [conn107] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.287-0500 I INDEX [conn88] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 4ac06258-0ea7-46c8-b773-0c637830872b). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.262-0500 I INDEX [conn85] Validation complete for collection config.tags (UUID: d225b508-e40e-4c3c-a716-26adc4561055). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.282-0500 I INDEX [conn46] validating collection test0_fsmdb0.agg_out (UUID: bf3cdc90-36f7-41c4-a8c0-a6114d9633bb)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.279-0500 I INDEX [conn43] validating collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.279-0500 I INDEX [conn45] validating index consistency _id_ on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.299-0500 I INDEX [conn107] Validation complete for collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.288-0500 I COMMAND [conn88] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.262-0500 I COMMAND [conn85] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.283-0500 I INDEX [conn46] validating index consistency _id_ on collection test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.279-0500 I INDEX [conn43] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.279-0500 I INDEX [conn45] validating index consistency _id_hashed on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.301-0500 I COMMAND [conn107] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.289-0500 I INDEX [conn88] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.262-0500 W STORAGE [conn85] Could not complete validation of table:collection-15-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.283-0500 I INDEX [conn46] validating index consistency _id_hashed on collection test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.279-0500 I INDEX [conn43] Validation complete for collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.279-0500 I INDEX [conn45] Validation complete for collection test0_fsmdb0.fsmcoll0 (UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.301-0500 W STORAGE [conn107] Could not complete validation of table:collection-10--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.291-0500 I INDEX [conn88] validating collection local.startup_log (UUID: e8e71921-e80f-42ad-92d0-ad769374a694)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.262-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.283-0500 I INDEX [conn46] Validation complete for collection test0_fsmdb0.agg_out (UUID: bf3cdc90-36f7-41c4-a8c0-a6114d9633bb). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.280-0500 I COMMAND [conn43] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.281-0500 I NETWORK [conn45] end connection 127.0.0.1:51890 (10 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.301-0500 I INDEX [conn107] validating collection local.oplog.rs (UUID: f999d0d7-cb6c-4d2c-a5ff-807a7ed09766)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.291-0500 I INDEX [conn88] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.262-0500 W STORAGE [conn85] Could not complete validation of table:index-16-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.284-0500 I COMMAND [conn46] CMD: validate test0_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.280-0500 W STORAGE [conn43] Could not complete validation of table:collection-25--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:06.373-0500 I NETWORK [conn44] end connection 127.0.0.1:51860 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.308-0500 I INDEX [conn107] Validation complete for collection local.oplog.rs (UUID: f999d0d7-cb6c-4d2c-a5ff-807a7ed09766). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.291-0500 I INDEX [conn88] Validation complete for collection local.startup_log (UUID: e8e71921-e80f-42ad-92d0-ad769374a694). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.262-0500 I INDEX [conn85] validating collection config.transactions (UUID: c2741992-901b-4092-a01f-3dfe88ab21c5)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.284-0500 W STORAGE [conn46] Could not complete validation of table:collection-33--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.280-0500 I INDEX [conn43] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.308-0500 I COMMAND [conn107] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.291-0500 I COMMAND [conn88] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.262-0500 I INDEX [conn85] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.284-0500 I INDEX [conn46] validating the internal structure of index _id_ on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.280-0500 W STORAGE [conn43] Could not complete validation of table:index-26--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.310-0500 I INDEX [conn107] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.292-0500 I INDEX [conn88] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.263-0500 I INDEX [conn85] Validation complete for collection config.transactions (UUID: c2741992-901b-4092-a01f-3dfe88ab21c5). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.284-0500 W STORAGE [conn46] Could not complete validation of table:index-34--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.280-0500 I INDEX [conn43] validating collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.313-0500 I INDEX [conn107] validating collection local.replset.election (UUID: 101a66fe-c3c0-4bee-94b9-e9bb8d04aa79)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.294-0500 I INDEX [conn88] validating collection local.system.replset (UUID: 318b7af2-23ac-427e-bba7-a3e3f5b1e60d)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.263-0500 I COMMAND [conn85] CMD: validate config.version, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.284-0500 I INDEX [conn46] validating the internal structure of index _id_hashed on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.280-0500 I INDEX [conn43] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.313-0500 I INDEX [conn107] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.294-0500 I INDEX [conn88] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.263-0500 W STORAGE [conn85] Could not complete validation of table:collection-39-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.284-0500 W STORAGE [conn46] Could not complete validation of table:index-35--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.280-0500 I INDEX [conn43] Validation complete for collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.313-0500 I INDEX [conn107] Validation complete for collection local.replset.election (UUID: 101a66fe-c3c0-4bee-94b9-e9bb8d04aa79). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.294-0500 I INDEX [conn88] Validation complete for collection local.system.replset (UUID: 318b7af2-23ac-427e-bba7-a3e3f5b1e60d). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.263-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection config.version
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.285-0500 I INDEX [conn46] validating collection test0_fsmdb0.fsmcoll0 (UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.281-0500 I COMMAND [conn43] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.313-0500 I COMMAND [conn107] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.295-0500 I COMMAND [conn88] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.263-0500 W STORAGE [conn85] Could not complete validation of table:index-40-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.286-0500 I INDEX [conn46] validating index consistency _id_ on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.282-0500 I INDEX [conn43] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.315-0500 I INDEX [conn107] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.296-0500 I INDEX [conn88] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.263-0500 I INDEX [conn85] validating collection config.version (UUID: d52b8328-6d55-4f54-8cfd-e715a58e3315)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.286-0500 I INDEX [conn46] validating index consistency _id_hashed on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.282-0500 W STORAGE [conn43] Could not complete validation of table:index-22--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.317-0500 I INDEX [conn107] validating collection local.replset.minvalid (UUID: 5dfed1a1-c7a1-4f91-a3da-2544e54d2e9a)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.297-0500 I INDEX [conn88] validating collection local.system.rollback.id (UUID: 2d9a033a-73d1-44ef-b7d1-30b6243b0419)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.263-0500 I INDEX [conn85] validating index consistency _id_ on collection config.version
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.287-0500 I INDEX [conn46] Validation complete for collection test0_fsmdb0.fsmcoll0 (UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.282-0500 I INDEX [conn43] validating collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.318-0500 I INDEX [conn107] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.298-0500 I INDEX [conn88] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.263-0500 I INDEX [conn85] Validation complete for collection config.version (UUID: d52b8328-6d55-4f54-8cfd-e715a58e3315). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.288-0500 I NETWORK [conn46] end connection 127.0.0.1:51524 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.282-0500 I INDEX [conn43] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.318-0500 I INDEX [conn107] Validation complete for collection local.replset.minvalid (UUID: 5dfed1a1-c7a1-4f91-a3da-2544e54d2e9a). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.298-0500 I INDEX [conn88] Validation complete for collection local.system.rollback.id (UUID: 2d9a033a-73d1-44ef-b7d1-30b6243b0419). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.265-0500 I COMMAND [conn85] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:06.373-0500 I NETWORK [conn45] end connection 127.0.0.1:51498 (8 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.283-0500 I INDEX [conn43] Validation complete for collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.318-0500 I COMMAND [conn107] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.299-0500 I COMMAND [conn88] CMD: validate test0_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.265-0500 W STORAGE [conn85] Could not complete validation of table:collection-10-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.625-0500 I COMMAND [ReplWriterWorker-10] CMD: drop test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.284-0500 I COMMAND [conn43] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.319-0500 I INDEX [conn107] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.301-0500 I INDEX [conn88] validating the internal structure of index _id_ on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.265-0500 I INDEX [conn85] validating collection local.oplog.rs (UUID: 5bb0c359-7cb9-48f8-8ff8-4b4c84c12ec5)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.625-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test0_fsmdb0.agg_out (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796667, 6), t: 1 } and commit timestamp Timestamp(1574796667, 6)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.291-0500 W STORAGE [conn43] Could not complete validation of table:collection-16--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.325-0500 I INDEX [conn107] validating collection local.replset.oplogTruncateAfterPoint (UUID: 31ce824c-ef86-4223-a4be-3069dae7b5f2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.303-0500 I INDEX [conn88] validating the internal structure of index _id_hashed on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.265-0500 I INDEX [conn85] Validation complete for collection local.oplog.rs (UUID: 5bb0c359-7cb9-48f8-8ff8-4b4c84c12ec5). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.625-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test0_fsmdb0.agg_out (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:07.639-0500 I COMMAND [ReplWriterWorker-5] CMD: drop test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.291-0500 I INDEX [conn43] validating collection local.oplog.rs (UUID: 307925b3-4143-4c06-a46a-f04119b3afb4)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.325-0500 I INDEX [conn107] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.305-0500 I INDEX [conn88] validating collection test0_fsmdb0.fsmcoll0 (UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.266-0500 I COMMAND [conn85] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.626-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb)'. Ident: 'index-78--2310912778499990807', commit timestamp: 'Timestamp(1574796667, 6)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:07.639-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796667, 14), t: 1 } and commit timestamp Timestamp(1574796667, 14)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.297-0500 I INDEX [conn43] Validation complete for collection local.oplog.rs (UUID: 307925b3-4143-4c06-a46a-f04119b3afb4). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.325-0500 I INDEX [conn107] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 31ce824c-ef86-4223-a4be-3069dae7b5f2). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.306-0500 I INDEX [conn88] validating index consistency _id_ on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.267-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.626-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb)'. Ident: 'index-89--2310912778499990807', commit timestamp: 'Timestamp(1574796667, 6)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:07.639-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.298-0500 I COMMAND [conn43] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.325-0500 I COMMAND [conn107] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.307-0500 I INDEX [conn88] validating index consistency _id_hashed on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.269-0500 I INDEX [conn85] validating collection local.replset.election (UUID: 5f00e271-c3c6-4d7b-9d39-1c8e9e8a77d4)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.626-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-77--2310912778499990807, commit timestamp: Timestamp(1574796667, 6)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:07.639-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)'. Ident: 'index-38--8000595249233899911', commit timestamp: 'Timestamp(1574796667, 14)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.299-0500 I INDEX [conn43] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.326-0500 I INDEX [conn107] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.307-0500 I INDEX [conn88] Validation complete for collection test0_fsmdb0.fsmcoll0 (UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:07.640-0500 I COMMAND [ReplWriterWorker-14] CMD: drop test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.269-0500 I INDEX [conn85] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.635-0500 I COMMAND [ReplWriterWorker-0] CMD: drop config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:07.639-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)'. Ident: 'index-39--8000595249233899911', commit timestamp: 'Timestamp(1574796667, 14)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.301-0500 I INDEX [conn43] validating collection local.replset.election (UUID: 7b059263-7419-4cf5-8072-b44957d729c9)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.329-0500 I INDEX [conn107] validating collection local.startup_log (UUID: fd9e05bb-cd6c-441c-9265-3783d4065b03)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.308-0500 I NETWORK [conn88] end connection 127.0.0.1:38858 (27 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:07.640-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796667, 14), t: 1 } and commit timestamp Timestamp(1574796667, 14)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.269-0500 I INDEX [conn85] Validation complete for collection local.replset.election (UUID: 5f00e271-c3c6-4d7b-9d39-1c8e9e8a77d4). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.635-0500 I STORAGE [ReplWriterWorker-0] dropCollection: config.cache.chunks.test0_fsmdb0.agg_out (b53e5b23-cfff-452a-9863-a2ca857d4f54) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796667, 10), t: 1 } and commit timestamp Timestamp(1574796667, 10)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:07.639-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test0_fsmdb0.fsmcoll0'. Ident: collection-37--8000595249233899911, commit timestamp: Timestamp(1574796667, 14)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.301-0500 I INDEX [conn43] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.329-0500 I INDEX [conn107] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.361-0500 I NETWORK [conn87] end connection 127.0.0.1:38832 (26 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:07.640-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.270-0500 I COMMAND [conn85] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.635-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for config.cache.chunks.test0_fsmdb0.agg_out (b53e5b23-cfff-452a-9863-a2ca857d4f54).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.301-0500 I INDEX [conn43] Validation complete for collection local.replset.election (UUID: 7b059263-7419-4cf5-8072-b44957d729c9). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.329-0500 I INDEX [conn107] Validation complete for collection local.startup_log (UUID: fd9e05bb-cd6c-441c-9265-3783d4065b03). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:06.373-0500 I NETWORK [conn86] end connection 127.0.0.1:38830 (25 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:07.640-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)'. Ident: 'index-38--4104909142373009110', commit timestamp: 'Timestamp(1574796667, 14)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.270-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.635-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test0_fsmdb0.agg_out (b53e5b23-cfff-452a-9863-a2ca857d4f54)'. Ident: 'index-100--2310912778499990807', commit timestamp: 'Timestamp(1574796667, 10)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.302-0500 I COMMAND [conn43] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.330-0500 I COMMAND [conn107] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:07.640-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)'. Ident: 'index-39--4104909142373009110', commit timestamp: 'Timestamp(1574796667, 14)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.613-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38876 #89 (26 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.273-0500 I INDEX [conn85] validating collection local.replset.minvalid (UUID: ce934bfb-84f4-4d44-a963-37c09c6c95a6)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.635-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test0_fsmdb0.agg_out (b53e5b23-cfff-452a-9863-a2ca857d4f54)'. Ident: 'index-101--2310912778499990807', commit timestamp: 'Timestamp(1574796667, 10)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.302-0500 W STORAGE [conn43] Could not complete validation of table:collection-4--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.331-0500 I INDEX [conn107] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:07.640-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test0_fsmdb0.fsmcoll0'. Ident: collection-37--4104909142373009110, commit timestamp: Timestamp(1574796667, 14)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.614-0500 I NETWORK [conn89] received client metadata from 127.0.0.1:38876 conn89: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.273-0500 I INDEX [conn85] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.635-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'config.cache.chunks.test0_fsmdb0.agg_out'. Ident: collection-99--2310912778499990807, commit timestamp: Timestamp(1574796667, 10)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.302-0500 I INDEX [conn43] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.334-0500 I INDEX [conn107] validating collection local.system.replset (UUID: 3eb8c3e8-f477-448c-9a25-5db5ef40b0d6)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.623-0500 I COMMAND [conn37] CMD: drop test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.273-0500 I INDEX [conn85] Validation complete for collection local.replset.minvalid (UUID: ce934bfb-84f4-4d44-a963-37c09c6c95a6). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.303-0500 I INDEX [conn43] validating collection local.replset.minvalid (UUID: e1166351-a2a9-4335-b202-a653b252b811)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.642-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.334-0500 I INDEX [conn107] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.637-0500 I COMMAND [conn37] CMD: drop test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.274-0500 I COMMAND [conn85] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.303-0500 I INDEX [conn43] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.642-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796667, 15), t: 1 } and commit timestamp Timestamp(1574796667, 15)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.334-0500 I INDEX [conn107] Validation complete for collection local.system.replset (UUID: 3eb8c3e8-f477-448c-9a25-5db5ef40b0d6). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.637-0500 I STORAGE [conn37] dropCollection: test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.274-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.303-0500 I INDEX [conn43] Validation complete for collection local.replset.minvalid (UUID: e1166351-a2a9-4335-b202-a653b252b811). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.642-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.335-0500 I COMMAND [conn107] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.637-0500 I STORAGE [conn37] Finishing collection drop for test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.279-0500 I INDEX [conn85] validating collection local.replset.oplogTruncateAfterPoint (UUID: b5258dce-fb89-4436-a191-b8586ea2e6c0)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.304-0500 I COMMAND [conn43] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.642-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)'. Ident: 'index-34--2310912778499990807', commit timestamp: 'Timestamp(1574796667, 15)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.336-0500 I INDEX [conn107] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.637-0500 I STORAGE [conn37] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)'. Ident: 'index-30-8224331490264904478', commit timestamp: 'Timestamp(1574796667, 14)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.279-0500 I INDEX [conn85] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.308-0500 I INDEX [conn43] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.642-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)'. Ident: 'index-35--2310912778499990807', commit timestamp: 'Timestamp(1574796667, 15)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.338-0500 I INDEX [conn107] validating collection local.system.rollback.id (UUID: 223114bc-2956-4d9b-8f0a-5c567c2cb10e)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.637-0500 I STORAGE [conn37] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)'. Ident: 'index-31-8224331490264904478', commit timestamp: 'Timestamp(1574796667, 14)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.279-0500 I INDEX [conn85] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: b5258dce-fb89-4436-a191-b8586ea2e6c0). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.311-0500 I INDEX [conn43] validating collection local.replset.oplogTruncateAfterPoint (UUID: 022b88bb-9282-4f39-aad1-6988341f4ac1)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.642-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test0_fsmdb0.fsmcoll0'. Ident: collection-33--2310912778499990807, commit timestamp: Timestamp(1574796667, 15)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.338-0500 I INDEX [conn107] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.637-0500 I STORAGE [conn37] Deferring table drop for collection 'test0_fsmdb0.fsmcoll0'. Ident: collection-29-8224331490264904478, commit timestamp: Timestamp(1574796667, 14)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.280-0500 I COMMAND [conn85] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.311-0500 I INDEX [conn43] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.338-0500 I INDEX [conn107] Validation complete for collection local.system.rollback.id (UUID: 223114bc-2956-4d9b-8f0a-5c567c2cb10e). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.280-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.311-0500 I INDEX [conn43] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 022b88bb-9282-4f39-aad1-6988341f4ac1). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.340-0500 I COMMAND [conn107] CMD: validate test0_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.284-0500 I INDEX [conn85] validating collection local.startup_log (UUID: a1488758-c116-4144-adba-02b8f3b8144d)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.311-0500 I COMMAND [conn43] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.341-0500 I INDEX [conn107] validating the internal structure of index _id_ on collection test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.284-0500 I INDEX [conn85] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.312-0500 I INDEX [conn43] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.343-0500 I INDEX [conn107] validating the internal structure of index _id_hashed on collection test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.284-0500 I INDEX [conn85] Validation complete for collection local.startup_log (UUID: a1488758-c116-4144-adba-02b8f3b8144d). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.315-0500 I INDEX [conn43] validating collection local.startup_log (UUID: 62f9eac5-a715-4818-9af1-edc47894f622)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.345-0500 I INDEX [conn107] validating collection test0_fsmdb0.agg_out (UUID: bf3cdc90-36f7-41c4-a8c0-a6114d9633bb)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.284-0500 I COMMAND [conn85] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.288-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.347-0500 I INDEX [conn107] validating index consistency _id_ on collection test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.315-0500 I INDEX [conn43] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.293-0500 I INDEX [conn85] validating collection local.system.replset (UUID: ea98bf03-b956-4e01-b9a4-857e601cceda)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.347-0500 I INDEX [conn107] validating index consistency _id_hashed on collection test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.316-0500 I INDEX [conn43] Validation complete for collection local.startup_log (UUID: 62f9eac5-a715-4818-9af1-edc47894f622). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.293-0500 I INDEX [conn85] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.347-0500 I INDEX [conn107] Validation complete for collection test0_fsmdb0.agg_out (UUID: bf3cdc90-36f7-41c4-a8c0-a6114d9633bb). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.316-0500 I COMMAND [conn43] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.293-0500 I INDEX [conn85] Validation complete for collection local.system.replset (UUID: ea98bf03-b956-4e01-b9a4-857e601cceda). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.348-0500 I COMMAND [conn107] CMD: validate test0_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.317-0500 I INDEX [conn43] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.294-0500 I COMMAND [conn85] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.349-0500 I INDEX [conn107] validating the internal structure of index _id_ on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.320-0500 I INDEX [conn43] validating collection local.system.replset (UUID: c43cc3e4-845d-4144-8406-83bf4df96d39)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.295-0500 I INDEX [conn85] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.351-0500 I INDEX [conn107] validating the internal structure of index _id_hashed on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.320-0500 I INDEX [conn43] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.297-0500 I INDEX [conn85] validating collection local.system.rollback.id (UUID: 0ad52f2a-9d3e-4f9f-b91b-17a9c570ab7e)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.647-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.fsmcoll0 took 0 ms and found the collection is not sharded
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.353-0500 I INDEX [conn107] validating collection test0_fsmdb0.fsmcoll0 (UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.320-0500 I INDEX [conn43] Validation complete for collection local.system.replset (UUID: c43cc3e4-845d-4144-8406-83bf4df96d39). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.297-0500 I INDEX [conn85] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.355-0500 I INDEX [conn107] validating index consistency _id_ on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.647-0500 I SHARDING [conn37] Updating metadata for collection test0_fsmdb0.fsmcoll0 from collection version: 1|3||5ddd7d71cf8184c2e1492ff8, shard version: 1|1||5ddd7d71cf8184c2e1492ff8 to collection version: due to UUID change
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.321-0500 I COMMAND [conn43] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.297-0500 I INDEX [conn85] Validation complete for collection local.system.rollback.id (UUID: 0ad52f2a-9d3e-4f9f-b91b-17a9c570ab7e). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.355-0500 I INDEX [conn107] validating index consistency _id_hashed on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.322-0500 I INDEX [conn43] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.647-0500 I COMMAND [ShardServerCatalogCacheLoader-0] CMD: drop config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.298-0500 I NETWORK [conn85] end connection 127.0.0.1:56112 (34 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.355-0500 I INDEX [conn107] Validation complete for collection test0_fsmdb0.fsmcoll0 (UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.325-0500 I INDEX [conn43] validating collection local.system.rollback.id (UUID: af3b2fdb-b5ae-49b3-a026-c55e1bf822c0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.647-0500 I STORAGE [ShardServerCatalogCacheLoader-0] dropCollection: config.cache.chunks.test0_fsmdb0.fsmcoll0 (44049d48-fa0f-4a8e-b7c3-56550b94d236) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.361-0500 I NETWORK [conn84] end connection 127.0.0.1:56084 (33 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.357-0500 I NETWORK [conn107] end connection 127.0.0.1:46332 (35 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.325-0500 I INDEX [conn43] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.325-0500 I INDEX [conn43] Validation complete for collection local.system.rollback.id (UUID: af3b2fdb-b5ae-49b3-a026-c55e1bf822c0). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.647-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Finishing collection drop for config.cache.chunks.test0_fsmdb0.fsmcoll0 (44049d48-fa0f-4a8e-b7c3-56550b94d236).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.362-0500 I NETWORK [conn106] end connection 127.0.0.1:46306 (34 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:06.373-0500 I NETWORK [conn83] end connection 127.0.0.1:56082 (32 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.327-0500 I COMMAND [conn43] CMD: validate test0_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.647-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0 (44049d48-fa0f-4a8e-b7c3-56550b94d236)'. Ident: 'index-33-8224331490264904478', commit timestamp: 'Timestamp(1574796667, 22)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:06.373-0500 I NETWORK [conn105] end connection 127.0.0.1:46302 (33 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.327-0500 W STORAGE [conn43] Could not complete validation of table:collection-77--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:07.619-0500 I SHARDING [conn22] distributed lock 'test0_fsmdb0' acquired for 'dropDatabase', ts : 5ddd7d7b5cde74b6784bb3aa
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.647-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0 (44049d48-fa0f-4a8e-b7c3-56550b94d236)'. Ident: 'index-34-8224331490264904478', commit timestamp: 'Timestamp(1574796667, 22)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.327-0500 I INDEX [conn43] validating the internal structure of index _id_ on collection test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.614-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46344 #108 (34 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:07.619-0500 I SHARDING [conn22] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:07.619-0500-5ddd7d7b5cde74b6784bb3ad", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55594", time: new Date(1574796667619), what: "dropDatabase.start", ns: "test0_fsmdb0", details: {} }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.647-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0'. Ident: collection-32-8224331490264904478, commit timestamp: Timestamp(1574796667, 22)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.327-0500 W STORAGE [conn43] Could not complete validation of table:index-78--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.615-0500 I NETWORK [conn108] received client metadata from 127.0.0.1:46344 conn108: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:07.621-0500 I SHARDING [conn22] distributed lock 'test0_fsmdb0.agg_out' acquired for 'dropCollection', ts : 5ddd7d7b5cde74b6784bb3b0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.327-0500 I INDEX [conn43] validating the internal structure of index _id_hashed on collection test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.623-0500 I COMMAND [conn55] CMD: drop test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:07.621-0500 I SHARDING [conn22] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:07.621-0500-5ddd7d7b5cde74b6784bb3b2", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55594", time: new Date(1574796667621), what: "dropCollection.start", ns: "test0_fsmdb0.agg_out", details: {} }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.327-0500 W STORAGE [conn43] Could not complete validation of table:index-89--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.623-0500 I STORAGE [conn55] dropCollection: test0_fsmdb0.agg_out (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:07.631-0500 I SHARDING [conn22] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:07.631-0500-5ddd7d7b5cde74b6784bb3ba", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55594", time: new Date(1574796667631), what: "dropCollection", ns: "test0_fsmdb0.agg_out", details: {} }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.328-0500 I INDEX [conn43] validating collection test0_fsmdb0.agg_out (UUID: bf3cdc90-36f7-41c4-a8c0-a6114d9633bb)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.623-0500 I STORAGE [conn55] Finishing collection drop for test0_fsmdb0.agg_out (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:07.634-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7d7b5cde74b6784bb3b0 unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.330-0500 I INDEX [conn43] validating index consistency _id_ on collection test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.623-0500 I STORAGE [conn55] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb)'. Ident: 'index-73--2588534479858262356', commit timestamp: 'Timestamp(1574796667, 6)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:07.635-0500 I SHARDING [conn22] distributed lock 'test0_fsmdb0.fsmcoll0' acquired for 'dropCollection', ts : 5ddd7d7b5cde74b6784bb3bd
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.330-0500 I INDEX [conn43] validating index consistency _id_hashed on collection test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.623-0500 I STORAGE [conn55] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb)'. Ident: 'index-79--2588534479858262356', commit timestamp: 'Timestamp(1574796667, 6)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:07.635-0500 I SHARDING [conn22] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:07.635-0500-5ddd7d7b5cde74b6784bb3bf", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55594", time: new Date(1574796667635), what: "dropCollection.start", ns: "test0_fsmdb0.fsmcoll0", details: {} }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.331-0500 I INDEX [conn43] Validation complete for collection test0_fsmdb0.agg_out (UUID: bf3cdc90-36f7-41c4-a8c0-a6114d9633bb). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.623-0500 I STORAGE [conn55] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-69--2588534479858262356, commit timestamp: Timestamp(1574796667, 6)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:07.649-0500 I SHARDING [conn22] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:07.649-0500-5ddd7d7b5cde74b6784bb3c8", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55594", time: new Date(1574796667649), what: "dropCollection", ns: "test0_fsmdb0.fsmcoll0", details: {} }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.331-0500 I COMMAND [conn43] CMD: validate test0_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.631-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.agg_out took 0 ms and found the collection is not sharded
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:07.651-0500 I COMMAND [ReplWriterWorker-9] CMD: drop config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.331-0500 W STORAGE [conn43] Could not complete validation of table:collection-33--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.631-0500 I SHARDING [conn55] Updating metadata for collection test0_fsmdb0.agg_out from collection version: 1|0||5ddd7d74cf8184c2e14932e8, shard version: 1|0||5ddd7d74cf8184c2e14932e8 to collection version: due to UUID change
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.331-0500 I INDEX [conn43] validating the internal structure of index _id_ on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:07.651-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7d7b5cde74b6784bb3bd unlocked.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:07.651-0500 I STORAGE [ReplWriterWorker-9] dropCollection: config.cache.chunks.test0_fsmdb0.fsmcoll0 (44049d48-fa0f-4a8e-b7c3-56550b94d236) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796667, 22), t: 1 } and commit timestamp Timestamp(1574796667, 22)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.631-0500 I COMMAND [ShardServerCatalogCacheLoader-1] CMD: drop config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.332-0500 W STORAGE [conn43] Could not complete validation of table:index-34--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:07.651-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for config.cache.chunks.test0_fsmdb0.fsmcoll0 (44049d48-fa0f-4a8e-b7c3-56550b94d236).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.631-0500 I STORAGE [ShardServerCatalogCacheLoader-1] dropCollection: config.cache.chunks.test0_fsmdb0.agg_out (b53e5b23-cfff-452a-9863-a2ca857d4f54) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.332-0500 I INDEX [conn43] validating the internal structure of index _id_hashed on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:07.651-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0 (44049d48-fa0f-4a8e-b7c3-56550b94d236)'. Ident: 'index-42--8000595249233899911', commit timestamp: 'Timestamp(1574796667, 22)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.631-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Finishing collection drop for config.cache.chunks.test0_fsmdb0.agg_out (b53e5b23-cfff-452a-9863-a2ca857d4f54).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.332-0500 W STORAGE [conn43] Could not complete validation of table:index-35--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:07.651-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0 (44049d48-fa0f-4a8e-b7c3-56550b94d236)'. Ident: 'index-43--8000595249233899911', commit timestamp: 'Timestamp(1574796667, 22)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.631-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test0_fsmdb0.agg_out (b53e5b23-cfff-452a-9863-a2ca857d4f54)'. Ident: 'index-93--2588534479858262356', commit timestamp: 'Timestamp(1574796667, 10)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.333-0500 I INDEX [conn43] validating collection test0_fsmdb0.fsmcoll0 (UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:07.652-0500 I COMMAND [ReplWriterWorker-8] CMD: drop config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:07.651-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0'. Ident: collection-41--8000595249233899911, commit timestamp: Timestamp(1574796667, 22)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.631-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test0_fsmdb0.agg_out (b53e5b23-cfff-452a-9863-a2ca857d4f54)'. Ident: 'index-95--2588534479858262356', commit timestamp: 'Timestamp(1574796667, 10)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.335-0500 I INDEX [conn43] validating index consistency _id_ on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.631-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for collection 'config.cache.chunks.test0_fsmdb0.agg_out'. Ident: collection-91--2588534479858262356, commit timestamp: Timestamp(1574796667, 10)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:07.652-0500 I STORAGE [ReplWriterWorker-8] dropCollection: config.cache.chunks.test0_fsmdb0.fsmcoll0 (44049d48-fa0f-4a8e-b7c3-56550b94d236) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796667, 22), t: 1 } and commit timestamp Timestamp(1574796667, 22)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.335-0500 I INDEX [conn43] validating index consistency _id_hashed on collection test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.640-0500 I COMMAND [conn55] CMD: drop test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:07.652-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for config.cache.chunks.test0_fsmdb0.fsmcoll0 (44049d48-fa0f-4a8e-b7c3-56550b94d236).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.336-0500 I INDEX [conn43] Validation complete for collection test0_fsmdb0.fsmcoll0 (UUID: d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.640-0500 I STORAGE [conn55] dropCollection: test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:07.652-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0 (44049d48-fa0f-4a8e-b7c3-56550b94d236)'. Ident: 'index-42--4104909142373009110', commit timestamp: 'Timestamp(1574796667, 22)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.337-0500 I NETWORK [conn43] end connection 127.0.0.1:34890 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.640-0500 I STORAGE [conn55] Finishing collection drop for test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:07.652-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0 (44049d48-fa0f-4a8e-b7c3-56550b94d236)'. Ident: 'index-43--4104909142373009110', commit timestamp: 'Timestamp(1574796667, 22)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.653-0500 I COMMAND [ReplWriterWorker-5] CMD: drop config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:06.373-0500 I NETWORK [conn42] end connection 127.0.0.1:34862 (8 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.640-0500 I STORAGE [conn55] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)'. Ident: 'index-26--2588534479858262356', commit timestamp: 'Timestamp(1574796667, 15)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:07.653-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0'. Ident: collection-41--4104909142373009110, commit timestamp: Timestamp(1574796667, 22)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.653-0500 I STORAGE [ReplWriterWorker-5] dropCollection: config.cache.chunks.test0_fsmdb0.fsmcoll0 (dad6441c-7462-448b-9e35-8123157c4429) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796667, 23), t: 1 } and commit timestamp Timestamp(1574796667, 23)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.640-0500 I STORAGE [conn55] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)'. Ident: 'index-27--2588534479858262356', commit timestamp: 'Timestamp(1574796667, 15)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.625-0500 I COMMAND [ReplWriterWorker-4] CMD: drop test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.653-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for config.cache.chunks.test0_fsmdb0.fsmcoll0 (dad6441c-7462-448b-9e35-8123157c4429).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.640-0500 I STORAGE [conn55] Deferring table drop for collection 'test0_fsmdb0.fsmcoll0'. Ident: collection-25--2588534479858262356, commit timestamp: Timestamp(1574796667, 15)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.626-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test0_fsmdb0.agg_out (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796667, 6), t: 1 } and commit timestamp Timestamp(1574796667, 6)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.653-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0 (dad6441c-7462-448b-9e35-8123157c4429)'. Ident: 'index-38--2310912778499990807', commit timestamp: 'Timestamp(1574796667, 23)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.648-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test0_fsmdb0.fsmcoll0 took 0 ms and found the collection is not sharded
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.626-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test0_fsmdb0.agg_out (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.653-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0 (dad6441c-7462-448b-9e35-8123157c4429)'. Ident: 'index-39--2310912778499990807', commit timestamp: 'Timestamp(1574796667, 23)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.648-0500 I SHARDING [conn55] Updating metadata for collection test0_fsmdb0.fsmcoll0 from collection version: 1|3||5ddd7d71cf8184c2e1492ff8, shard version: 1|3||5ddd7d71cf8184c2e1492ff8 to collection version: due to UUID change
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.626-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.agg_out (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb)'. Ident: 'index-78--7234316082034423155', commit timestamp: 'Timestamp(1574796667, 6)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.653-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0'. Ident: collection-37--2310912778499990807, commit timestamp: Timestamp(1574796667, 23)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.649-0500 I COMMAND [ShardServerCatalogCacheLoader-1] CMD: drop config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.626-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.agg_out (bf3cdc90-36f7-41c4-a8c0-a6114d9633bb)'. Ident: 'index-89--7234316082034423155', commit timestamp: 'Timestamp(1574796667, 6)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.649-0500 I STORAGE [ShardServerCatalogCacheLoader-1] dropCollection: config.cache.chunks.test0_fsmdb0.fsmcoll0 (dad6441c-7462-448b-9e35-8123157c4429) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.626-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test0_fsmdb0.agg_out'. Ident: collection-77--7234316082034423155, commit timestamp: Timestamp(1574796667, 6)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.649-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Finishing collection drop for config.cache.chunks.test0_fsmdb0.fsmcoll0 (dad6441c-7462-448b-9e35-8123157c4429).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.635-0500 I COMMAND [ReplWriterWorker-9] CMD: drop config.cache.chunks.test0_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.649-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0 (dad6441c-7462-448b-9e35-8123157c4429)'. Ident: 'index-30--2588534479858262356', commit timestamp: 'Timestamp(1574796667, 23)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.635-0500 I STORAGE [ReplWriterWorker-9] dropCollection: config.cache.chunks.test0_fsmdb0.agg_out (b53e5b23-cfff-452a-9863-a2ca857d4f54) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796667, 10), t: 1 } and commit timestamp Timestamp(1574796667, 10)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.649-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0 (dad6441c-7462-448b-9e35-8123157c4429)'. Ident: 'index-31--2588534479858262356', commit timestamp: 'Timestamp(1574796667, 23)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.635-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for config.cache.chunks.test0_fsmdb0.agg_out (b53e5b23-cfff-452a-9863-a2ca857d4f54).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.649-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0'. Ident: collection-29--2588534479858262356, commit timestamp: Timestamp(1574796667, 23)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.635-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test0_fsmdb0.agg_out (b53e5b23-cfff-452a-9863-a2ca857d4f54)'. Ident: 'index-100--7234316082034423155', commit timestamp: 'Timestamp(1574796667, 10)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.635-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test0_fsmdb0.agg_out (b53e5b23-cfff-452a-9863-a2ca857d4f54)'. Ident: 'index-101--7234316082034423155', commit timestamp: 'Timestamp(1574796667, 10)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.635-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'config.cache.chunks.test0_fsmdb0.agg_out'. Ident: collection-99--7234316082034423155, commit timestamp: Timestamp(1574796667, 10)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.642-0500 I COMMAND [ReplWriterWorker-3] CMD: drop test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.642-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796667, 15), t: 1 } and commit timestamp Timestamp(1574796667, 15)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.642-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.642-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)'. Ident: 'index-34--7234316082034423155', commit timestamp: 'Timestamp(1574796667, 15)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.642-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test0_fsmdb0.fsmcoll0 (d0083b18-6530-4d1e-bc5a-bf2e2c2ae6d3)'. Ident: 'index-35--7234316082034423155', commit timestamp: 'Timestamp(1574796667, 15)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.642-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test0_fsmdb0.fsmcoll0'. Ident: collection-33--7234316082034423155, commit timestamp: Timestamp(1574796667, 15)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.653-0500 I COMMAND [ReplWriterWorker-14] CMD: drop config.cache.chunks.test0_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.653-0500 I STORAGE [ReplWriterWorker-14] dropCollection: config.cache.chunks.test0_fsmdb0.fsmcoll0 (dad6441c-7462-448b-9e35-8123157c4429) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796667, 23), t: 1 } and commit timestamp Timestamp(1574796667, 23)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.653-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for config.cache.chunks.test0_fsmdb0.fsmcoll0 (dad6441c-7462-448b-9e35-8123157c4429).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.653-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0 (dad6441c-7462-448b-9e35-8123157c4429)'. Ident: 'index-38--7234316082034423155', commit timestamp: 'Timestamp(1574796667, 23)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.653-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0 (dad6441c-7462-448b-9e35-8123157c4429)'. Ident: 'index-39--7234316082034423155', commit timestamp: 'Timestamp(1574796667, 23)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.653-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'config.cache.chunks.test0_fsmdb0.fsmcoll0'. Ident: collection-37--7234316082034423155, commit timestamp: Timestamp(1574796667, 23)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.651-0500 I COMMAND [conn55] dropDatabase test0_fsmdb0 - starting
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.652-0500 I COMMAND [conn55] dropDatabase test0_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.652-0500 I COMMAND [conn55] dropDatabase test0_fsmdb0 - finished
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.656-0500 I COMMAND [ReplWriterWorker-15] dropDatabase test0_fsmdb0 - starting
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.656-0500 I COMMAND [ReplWriterWorker-1] dropDatabase test0_fsmdb0 - starting
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.656-0500 I COMMAND [ReplWriterWorker-15] dropDatabase test0_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.656-0500 I COMMAND [ReplWriterWorker-1] dropDatabase test0_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.656-0500 I COMMAND [ReplWriterWorker-15] dropDatabase test0_fsmdb0 - finished
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.656-0500 I COMMAND [ReplWriterWorker-1] dropDatabase test0_fsmdb0 - finished
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.658-0500 I COMMAND [conn37] dropDatabase test0_fsmdb0 - starting
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.658-0500 I COMMAND [conn37] dropDatabase test0_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.658-0500 I COMMAND [conn37] dropDatabase test0_fsmdb0 - finished
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:07.659-0500 I COMMAND [ReplWriterWorker-8] dropDatabase test0_fsmdb0 - starting
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:07.659-0500 I COMMAND [ReplWriterWorker-8] dropDatabase test0_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:07.659-0500 I COMMAND [ReplWriterWorker-8] dropDatabase test0_fsmdb0 - finished
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:07.660-0500 I COMMAND [ReplWriterWorker-3] dropDatabase test0_fsmdb0 - starting
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:07.660-0500 I COMMAND [ReplWriterWorker-3] dropDatabase test0_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:07.660-0500 I COMMAND [ReplWriterWorker-3] dropDatabase test0_fsmdb0 - finished
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.663-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 took 0 ms and failed :: caused by :: NamespaceNotFound: database test0_fsmdb0 not found
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:07.663-0500 I SHARDING [conn37] setting this node's cached database version for test0_fsmdb0 to {}
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.664-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test0_fsmdb0 took 0 ms and failed :: caused by :: NamespaceNotFound: database test0_fsmdb0 not found
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:07.664-0500 I SHARDING [conn55] setting this node's cached database version for test0_fsmdb0 to {}
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:07.664-0500 I SHARDING [conn22] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:07.664-0500-5ddd7d7b5cde74b6784bb3d0", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55594", time: new Date(1574796667664), what: "dropDatabase", ns: "test0_fsmdb0", details: {} }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:07.664-0500 I SHARDING [ReplWriterWorker-7] setting this node's cached database version for test0_fsmdb0 to {}
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:07.665-0500 I SHARDING [ReplWriterWorker-4] setting this node's cached database version for test0_fsmdb0 to {}
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:07.666-0500 I SHARDING [ReplWriterWorker-4] setting this node's cached database version for test0_fsmdb0 to {}
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:07.666-0500 I SHARDING [ReplWriterWorker-5] setting this node's cached database version for test0_fsmdb0 to {}
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:07.667-0500 I SHARDING [conn22] distributed lock with ts: '5ddd7d7b5cde74b6784bb3aa' unlocked.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:07.667-0500 I NETWORK [conn27] end connection 127.0.0.1:57874 (1 connection now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:07.668-0500 I NETWORK [conn26] end connection 127.0.0.1:57870 (0 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:07.668-0500 I NETWORK [conn59] end connection 127.0.0.1:44730 (0 connections now open)
[executor:fsm_workload_test:job0] 2019-11-26T14:31:07.668-0500 agg_out:CleanupConcurrencyWorkloads ran in 0.06 seconds: no failures detected.
[CheckReplDBHashInBackground:job0] Stopping the background check repl dbhash thread.
[executor] 2019-11-26T14:31:07.668-0500 Waiting for threads to complete
[executor] 2019-11-26T14:31:07.669-0500 Threads are completed!
[executor] 2019-11-26T14:31:07.669-0500 Summary of latest execution: All 6 test(s) passed in 23.80 seconds.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:07.671-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57880 #30 (1 connection now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:07.672-0500 I NETWORK [conn30] received client metadata from 127.0.0.1:57880 conn30: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:07.672-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44740 #60 (1 connection now open)
[CheckReplDBHashInBackground:job0] Starting the background check repl dbhash thread.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:07.672-0500 I NETWORK [conn60] received client metadata from 127.0.0.1:44740 conn60: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:07.673-0500 I NETWORK [conn30] end connection 127.0.0.1:57880 (0 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:07.673-0500 I NETWORK [conn60] end connection 127.0.0.1:44740 (0 connections now open)
[CheckReplDBHashInBackground:job0] Resuming the background check repl dbhash thread.
[executor:fsm_workload_test:job0] 2019-11-26T14:31:07.675-0500 Running agg_out.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval TestData = new Object(); TestData["usingReplicaSetShards"] = true; TestData["runningWithAutoSplit"] = false; TestData["runningWithBalancer"] = false; TestData["fsmWorkloads"] = ["jstests/concurrency/fsm_workloads/agg_out.js"]; TestData["resmokeDbPathPrefix"] = "/home/nz_linux/data/job0/resmoke"; TestData["dbNamePrefix"] = "test1_"; TestData["sameDB"] = false; TestData["sameCollection"] = false; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "resmoke_runner"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); --readMode=commands mongodb://localhost:20007,localhost:20008 jstests/concurrency/fsm_libs/resmoke_runner.js
[fsm_workload_test:agg_out] 2019-11-26T14:31:07.675-0500 Starting FSM workload jstests/concurrency/fsm_workloads/agg_out.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval TestData = new Object(); TestData["usingReplicaSetShards"] = true; TestData["runningWithAutoSplit"] = false; TestData["runningWithBalancer"] = false; TestData["fsmWorkloads"] = ["jstests/concurrency/fsm_workloads/agg_out.js"]; TestData["resmokeDbPathPrefix"] = "/home/nz_linux/data/job0/resmoke"; TestData["dbNamePrefix"] = "test1_"; TestData["sameDB"] = false; TestData["sameCollection"] = false; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "resmoke_runner"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); --readMode=commands mongodb://localhost:20007,localhost:20008 jstests/concurrency/fsm_libs/resmoke_runner.js
[executor:fsm_workload_test:job0] 2019-11-26T14:31:07.675-0500 Running agg_out:CheckReplDBHashInBackground...
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.609-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796662, 29)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.721-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash_background.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash_background"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash_background.js
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.609-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-74--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796660, 4)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.610-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-75--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796660, 4)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.611-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-70--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796660, 4)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.612-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-77--2588534479858262356 (ns: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266) with drop timestamp Timestamp(1574796660, 1268)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.613-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-83--2588534479858262356 (ns: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266) with drop timestamp Timestamp(1574796660, 1268)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.614-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-71--2588534479858262356 (ns: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266) with drop timestamp Timestamp(1574796660, 1268)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.615-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-76--2588534479858262356 (ns: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095) with drop timestamp Timestamp(1574796660, 1269)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.617-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-85--2588534479858262356 (ns: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095) with drop timestamp Timestamp(1574796660, 1269)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.618-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-72--2588534479858262356 (ns: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095) with drop timestamp Timestamp(1574796660, 1269)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.619-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-82--2588534479858262356 (ns: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2) with drop timestamp Timestamp(1574796660, 1524)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.620-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-87--2588534479858262356 (ns: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2) with drop timestamp Timestamp(1574796660, 1524)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.620-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-80--2588534479858262356 (ns: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2) with drop timestamp Timestamp(1574796660, 1524)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.621-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-90--2588534479858262356 (ns: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37) with drop timestamp Timestamp(1574796660, 2031)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.622-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-92--2588534479858262356 (ns: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37) with drop timestamp Timestamp(1574796660, 2031)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.623-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-89--2588534479858262356 (ns: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37) with drop timestamp Timestamp(1574796660, 2031)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.625-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-98--2588534479858262356 (ns: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0) with drop timestamp Timestamp(1574796660, 2533)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.626-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-99--2588534479858262356 (ns: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0) with drop timestamp Timestamp(1574796660, 2533)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:08.627-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-97--2588534479858262356 (ns: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0) with drop timestamp Timestamp(1574796660, 2533)
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.731-0500 FSM workload jstests/concurrency/fsm_workloads/agg_out.js started with pid 15266.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.734-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js started with pid 15269.
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.752-0500 MongoDB shell version v0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.756-0500 MongoDB shell version v0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.802-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:09.803-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44742 #61 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:09.803-0500 I NETWORK [conn61] received client metadata from 127.0.0.1:44742 conn61: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.805-0500 Implicit session: session { "id" : UUID("a974a92d-e243-4b87-b89a-2bc31ad7c8d4") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.806-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:09.806-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44744 #62 (2 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:09.806-0500 I NETWORK [conn62] received client metadata from 127.0.0.1:44744 conn62: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.807-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.808-0500 true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.808-0500 Implicit session: session { "id" : UUID("cf450b15-804f-4097-a776-84016bebb4e9") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.810-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.811-0500 true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.815-0500 2019-11-26T14:31:09.814-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.815-0500 2019-11-26T14:31:09.815-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.815-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56148 #86 (33 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.815-0500 I NETWORK [conn86] received client metadata from 127.0.0.1:56148 conn86: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.816-0500 2019-11-26T14:31:09.815-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.816-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56150 #87 (34 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.816-0500 I NETWORK [conn87] received client metadata from 127.0.0.1:56150 conn87: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.817-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.817-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.817-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.817-0500 [jsTest] New session started with sessionID: { "id" : UUID("1d32741d-6e02-4981-9d06-11f0d6886c21") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.817-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.817-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.817-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.819-0500 2019-11-26T14:31:09.818-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.819-0500 2019-11-26T14:31:09.819-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.819-0500 2019-11-26T14:31:09.819-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.819-0500 2019-11-26T14:31:09.819-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.819-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51926 #46 (10 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.819-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38894 #90 (27 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.819-0500 2019-11-26T14:31:09.819-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.819-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52818 #46 (10 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.820-0500 2019-11-26T14:31:09.820-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.819-0500 I NETWORK [conn46] received client metadata from 127.0.0.1:51926 conn46: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.820-0500 2019-11-26T14:31:09.819-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.819-0500 I NETWORK [conn90] received client metadata from 127.0.0.1:38894 conn90: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.820-0500 2019-11-26T14:31:09.820-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.820-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56158 #88 (35 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.819-0500 I NETWORK [conn46] received client metadata from 127.0.0.1:52818 conn46: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.821-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.820-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38900 #91 (28 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.821-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.821-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.820-0500 I NETWORK [conn88] received client metadata from 127.0.0.1:56158 conn88: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.821-0500 [jsTest] New session started with sessionID: { "id" : UUID("7e97b0c8-c535-4d74-9d00-aef94372f016") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.820-0500 I NETWORK [conn91] received client metadata from 127.0.0.1:38900 conn91: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.821-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.820-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56162 #89 (36 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.821-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.820-0500 I NETWORK [conn89] received client metadata from 127.0.0.1:56162 conn89: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.821-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.821-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.821-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34930 #44 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.821-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46370 #109 (35 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.821-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.822-0500 I NETWORK [conn44] received client metadata from 127.0.0.1:34930 conn44: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.822-0500 2019-11-26T14:31:09.821-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.822-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.821-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51572 #47 (9 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.822-0500 2019-11-26T14:31:09.821-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.822-0500 I NETWORK [conn109] received client metadata from 127.0.0.1:46370 conn109: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.823-0500 [jsTest] New session started with sessionID: { "id" : UUID("c7da74d9-a3e4-4f19-ac21-10efe4ab8adb") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.822-0500 I NETWORK [conn47] received client metadata from 127.0.0.1:51572 conn47: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.823-0500 2019-11-26T14:31:09.821-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.822-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46376 #110 (36 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.823-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.822-0500 I NETWORK [conn110] received client metadata from 127.0.0.1:46376 conn110: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.823-0500 2019-11-26T14:31:09.821-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.823-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.823-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52834 #47 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.824-0500 2019-11-26T14:31:09.822-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.823-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38914 #92 (29 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.824-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.823-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51950 #47 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.824-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.823-0500 I NETWORK [conn47] received client metadata from 127.0.0.1:52834 conn47: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.824-0500 2019-11-26T14:31:09.823-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.824-0500 I NETWORK [conn92] received client metadata from 127.0.0.1:38914 conn92: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.824-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.824-0500 I NETWORK [conn47] received client metadata from 127.0.0.1:51950 conn47: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.824-0500 2019-11-26T14:31:09.823-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.824-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38918 #93 (30 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.825-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.824-0500 I NETWORK [conn93] received client metadata from 127.0.0.1:38918 conn93: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.825-0500 2019-11-26T14:31:09.823-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.825-0500 [jsTest] New session started with sessionID: { "id" : UUID("d2792eb4-9630-4de3-9db3-0163798544ba") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.825-0500 2019-11-26T14:31:09.823-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.825-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.825-0500 2019-11-26T14:31:09.824-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.825-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.825-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.825-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.825-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.825-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }
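The skip decision logged above can be sketched as follows. This is an illustrative reconstruction, not the actual CheckReplDBHashInBackground hook code; the topology dict shape simply mirrors the `{ "type" : "replica set", ... }` document in the log line:

```python
def should_skip_dbhash_check(topology):
    """Skip data-consistency checks for a single-node replica set:
    with no secondaries there is nothing to compare the primary's
    dbhash against."""
    nodes = topology.get("nodes", [])
    return topology.get("type") == "replica set" and len(nodes) < 2


# The 1-node CSRS from the log line is skipped; a 3-node shard is not.
csrs = {"type": "replica set", "primary": "localhost:20000",
        "nodes": ["localhost:20000"]}
shard = {"type": "replica set", "primary": "localhost:20001",
         "nodes": ["localhost:20001", "localhost:20002", "localhost:20003"]}
print(should_skip_dbhash_check(csrs))   # True
print(should_skip_dbhash_check(shard))  # False
```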
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.826-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.826-0500 [jsTest] New session started with sessionID: { "id" : UUID("8c7c5b23-f6dd-45bc-947f-eb2283fd3ce5") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.826-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.826-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.826-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.826-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34948 #45 (10 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.826-0500 2019-11-26T14:31:09.825-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.826-0500 2019-11-26T14:31:09.825-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.826-0500 2019-11-26T14:31:09.825-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.826-0500 2019-11-26T14:31:09.825-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.826-0500 2019-11-26T14:31:09.826-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.826-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46388 #111 (37 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.826-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51584 #48 (10 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.826-0500 I NETWORK [conn45] received client metadata from 127.0.0.1:34948 conn45: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.826-0500 I NETWORK [conn48] received client metadata from 127.0.0.1:51584 conn48: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.826-0500 I NETWORK [conn111] received client metadata from 127.0.0.1:46388 conn111: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.826-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46392 #112 (38 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.827-0500 I NETWORK [conn112] received client metadata from 127.0.0.1:46392 conn112: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.827-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.827-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.827-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.827-0500 [jsTest] New session started with sessionID: { "id" : UUID("a51d2333-6bbd-4121-9d6a-a09712cc2abb") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.827-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.827-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.827-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.835-0500 setting random seed: 911854311
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:09.836-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44786 #63 (3 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:09.836-0500 I NETWORK [conn63] received client metadata from 127.0.0.1:44786 conn63: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.836-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.836-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.836-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.837-0500 [jsTest] New session started with sessionID: { "id" : UUID("60769e48-332b-4589-ade0-ca535f2c1413") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.837-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.837-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.837-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.837-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56190 #90 (37 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.838-0500 I NETWORK [conn90] received client metadata from 127.0.0.1:56190 conn90: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.838-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.838-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.838-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.838-0500 [jsTest] New session started with sessionID: { "id" : UUID("ddb4db0f-7c90-4eec-b684-195659b74777") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.838-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.838-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.838-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.840-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38932 #94 (31 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.840-0500 I NETWORK [conn94] received client metadata from 127.0.0.1:38932 conn94: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.841-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.841-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.841-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.841-0500 [jsTest] New session started with sessionID: { "id" : UUID("81335018-84e5-43a1-800f-fafdebec5f72") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.841-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.841-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.841-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.841-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46400 #113 (39 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.841-0500 I NETWORK [conn113] received client metadata from 127.0.0.1:46400 conn113: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.842-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.842-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.842-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.842-0500 [jsTest] New session started with sessionID: { "id" : UUID("03283c6f-75e1-44c5-ba8c-76b78c1eeac1") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.842-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.842-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.842-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:09.843-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44794 #64 (4 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:09.843-0500 I NETWORK [conn64] received client metadata from 127.0.0.1:44794 conn64: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:09.843-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57938 #31 (1 connection now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:09.843-0500 I NETWORK [conn31] received client metadata from 127.0.0.1:57938 conn31: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.846-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56200 #91 (38 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.846-0500 I NETWORK [conn91] received client metadata from 127.0.0.1:56200 conn91: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.847-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.847-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.847-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.847-0500 [jsTest] New session started with sessionID: { "id" : UUID("10e4d534-34e6-4227-8a27-dae3bfd44ffb") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.847-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.847-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.847-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.847-0500 Recreating replica set from config {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 "_id" : "config-rs",
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 "version" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 "configsvr" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 "protocolVersion" : NumberLong(1),
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 "writeConcernMajorityJournalDefault" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 "members" : [
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 "_id" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 "host" : "localhost:20000",
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 "priority" : 1,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.848-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56202 #92 (39 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 "tags" : {
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.848-0500 I NETWORK [conn92] received client metadata from 127.0.0.1:56202 conn92: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.848-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 ],
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 "settings" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 "chainingAllowed" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 "heartbeatIntervalMillis" : 2000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 "heartbeatTimeoutSecs" : 10,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 "electionTimeoutMillis" : 86400000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 "catchUpTimeoutMillis" : -1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 "catchUpTakeoverDelayMillis" : 30000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 "getLastErrorModes" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 "getLastErrorDefaults" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 "w" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 "wtimeout" : 0
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 "replicaSetId" : ObjectId("5ddd7d655cde74b6784bb14d")
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.849-0500 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.849-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38944 #95 (32 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.849-0500 I NETWORK [conn95] received client metadata from 127.0.0.1:38944 conn95: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.850-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.850-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.850-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.850-0500 [jsTest] New session started with sessionID: { "id" : UUID("32bb5d8f-e52a-4955-a151-59e06afecc47") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.850-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.850-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.850-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.851-0500 Recreating replica set from config {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.851-0500 "_id" : "shard-rs0",
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.851-0500 "version" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.851-0500 "protocolVersion" : NumberLong(1),
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.851-0500 "writeConcernMajorityJournalDefault" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.851-0500 "members" : [
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.851-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.851-0500 "_id" : 0,
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.851-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38946 #96 (33 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.851-0500 "host" : "localhost:20001",
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.851-0500 I NETWORK [conn96] received client metadata from 127.0.0.1:38946 conn96: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.851-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.851-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.851-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.851-0500 "priority" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.851-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "_id" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "host" : "localhost:20002",
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "_id" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "host" : "localhost:20003",
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.852-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.853-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.853-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.852-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52872 #48 (12 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.853-0500 },
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.851-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51982 #48 (12 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.853-0500 "slaveDelay" : NumberLong(0),
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.853-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38952 #97 (34 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.853-0500 "votes" : 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.852-0500 I NETWORK [conn48] received client metadata from 127.0.0.1:52872 conn48: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.853-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.853-0500 ],
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.853-0500 "settings" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.853-0500 "chainingAllowed" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.854-0500 "heartbeatIntervalMillis" : 2000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.854-0500 "heartbeatTimeoutSecs" : 10,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.854-0500 "electionTimeoutMillis" : 86400000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.854-0500 "catchUpTimeoutMillis" : -1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.854-0500 "catchUpTakeoverDelayMillis" : 30000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.854-0500 "getLastErrorModes" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.854-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.854-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.854-0500 "getLastErrorDefaults" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.854-0500 "w" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.854-0500 "wtimeout" : 0
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.854-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.854-0500 "replicaSetId" : ObjectId("5ddd7d683bbfe7fa5630d3b8")
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.854-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.854-0500 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.852-0500 I NETWORK [conn48] received client metadata from 127.0.0.1:51982 conn48: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.853-0500 I NETWORK [conn97] received client metadata from 127.0.0.1:38952 conn97: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.854-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46420 #114 (40 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.854-0500 I NETWORK [conn114] received client metadata from 127.0.0.1:46420 conn114: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.855-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.855-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.855-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.855-0500 [jsTest] New session started with sessionID: { "id" : UUID("b705efe3-0641-4c2c-b9d2-3d26667bbaaa") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.855-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.855-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.855-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 Recreating replica set from config {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 "_id" : "shard-rs1",
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 "version" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 "protocolVersion" : NumberLong(1),
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 "writeConcernMajorityJournalDefault" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 "members" : [
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 "_id" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 "host" : "localhost:20004",
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 "buildIndexes" : true,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.856-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46422 #115 (41 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 "hidden" : false,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.856-0500 I NETWORK [conn115] received client metadata from 127.0.0.1:46422 conn115: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 "priority" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.856-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "_id" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "host" : "localhost:20005",
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "_id" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "host" : "localhost:20006",
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.857-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 ],
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 "settings" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 "chainingAllowed" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 "heartbeatIntervalMillis" : 2000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 "heartbeatTimeoutSecs" : 10,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 "electionTimeoutMillis" : 86400000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 "catchUpTimeoutMillis" : -1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 "catchUpTakeoverDelayMillis" : 30000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 "getLastErrorModes" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 "getLastErrorDefaults" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 "w" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 "wtimeout" : 0
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 "replicaSetId" : ObjectId("5ddd7d6bcf8184c2e1492eba")
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.858-0500 }
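Each "Recreating replica set from config" dump above follows the same member layout: one electable member (`priority: 1`, `_id: 0`), priority-0 secondaries, every member voting, and automatic elections effectively disabled via a 24-hour `electionTimeoutMillis` (86400000 ms). A minimal sanity check over such a config might look like the sketch below; `check_fixture_rs_config` is a hypothetical helper, not part of resmoke, and only the fields taken from the dumps above are assumed:

```python
def check_fixture_rs_config(config):
    """Verify the member/election layout shown in the fixture's
    'Recreating replica set from config' dumps (illustrative only)."""
    members = config["members"]
    # Exactly one electable member, and it is member _id 0.
    electable = [m for m in members if m["priority"] > 0]
    assert len(electable) == 1 and electable[0]["_id"] == 0
    # All members vote.
    assert all(m["votes"] == 1 for m in members)
    # 86400000 ms = 24 h: the fixture effectively pins the primary.
    assert config["settings"]["electionTimeoutMillis"] == 24 * 60 * 60 * 1000
    return True


# shard-rs0 as dumped in the log, reduced to the checked fields.
shard_rs0 = {
    "_id": "shard-rs0",
    "members": [
        {"_id": 0, "host": "localhost:20001", "priority": 1, "votes": 1},
        {"_id": 1, "host": "localhost:20002", "priority": 0, "votes": 1},
        {"_id": 2, "host": "localhost:20003", "priority": 0, "votes": 1},
    ],
    "settings": {"electionTimeoutMillis": 86400000},
}
print(check_fixture_rs_config(shard_rs0))  # True
```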
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.857-0500 I NETWORK [listener] connection accepted from 127.0.0.1:34984 #46 (11 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.859-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.859-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.859-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.859-0500 [jsTest] New session started with sessionID: { "id" : UUID("ff3e7434-8ba8-455a-bcea-46def16deaef") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.859-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.859-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.859-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.857-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46428 #116 (42 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.856-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51622 #49 (11 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.857-0500 I NETWORK [conn46] received client metadata from 127.0.0.1:34984 conn46: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.858-0500 I NETWORK [conn116] received client metadata from 127.0.0.1:46428 conn116: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.856-0500 I NETWORK [conn49] received client metadata from 127.0.0.1:51622 conn49: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.859-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.859-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.860-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.860-0500 [jsTest] New session started with sessionID: { "id" : UUID("5c42aee0-126d-4d0f-893c-d0fc04f4cdbe") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.860-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.860-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.860-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.860-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.860-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.860-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.860-0500 [jsTest] New session started with sessionID: { "id" : UUID("8d65de22-913b-4d9c-9fa6-6258c690546d") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.860-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.860-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.860-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.863-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.863-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.863-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.863-0500 [jsTest] New session started with sessionID: { "id" : UUID("77af661d-f553-4de9-8d56-6c06078a7914") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.863-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.863-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.863-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.863-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.863-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.863-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.863-0500 [jsTest] New session started with sessionID: { "id" : UUID("286dda13-c17e-419d-8c82-2e5bfc6b4b8e") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.864-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.864-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.864-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.864-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.864-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.864-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.864-0500 [jsTest] New session started with sessionID: { "id" : UUID("a5b7e8fe-c401-4ff5-ac73-ac9d1fd7dfde") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.864-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.864-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.864-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.866-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.866-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.866-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.866-0500 [jsTest] Workload(s) started: jstests/concurrency/fsm_workloads/agg_out.js
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.866-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.866-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.866-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.867-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.867-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.867-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.867-0500 [jsTest] New session started with sessionID: { "id" : UUID("802be2a3-9ddb-4c1d-9b87-98584e471fdb") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.867-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.867-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:09.867-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.869-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0' acquired for 'dropCollection', ts : 5ddd7d7d5cde74b6784bb3e9
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.871-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0.fsmcoll0' acquired for 'dropCollection', ts : 5ddd7d7d5cde74b6784bb3eb
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.872-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d7d5cde74b6784bb3eb' unlocked.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.873-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d7d5cde74b6784bb3e9' unlocked.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.875-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d7d5cde74b6784bb3f3
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.878-0500 I SHARDING [conn19] Registering new database { _id: "test1_fsmdb0", primary: "shard-rs0", partitioned: false, version: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } } in sharding catalog
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.880-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 from version {} to version { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.880-0500 I SHARDING [conn37] setting this node's cached database version for test1_fsmdb0 to { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.880-0500 I SHARDING [conn19] Enabling sharding for database [test1_fsmdb0] in config db
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.882-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d7d5cde74b6784bb3f3' unlocked.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.884-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d7d5cde74b6784bb3fc
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.885-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0.fsmcoll0' acquired for 'shardCollection', ts : 5ddd7d7d5cde74b6784bb3fe
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.887-0500 I STORAGE [conn37] createCollection: test1_fsmdb0.fsmcoll0 with provided UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38 and options: { uuid: UUID("dccb4b9f-92a4-4a8c-933f-ac40a7941a38") }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.898-0500 I INDEX [conn37] index build: done building index _id_ on ns test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.898-0500 I INDEX [conn37] Registering index build: 82c6d58f-4430-4dc5-9dc6-68f9143a8303
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.900-0500 I STORAGE [ReplWriterWorker-11] createCollection: test1_fsmdb0.fsmcoll0 with provided UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38 and options: { uuid: UUID("dccb4b9f-92a4-4a8c-933f-ac40a7941a38") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.901-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:09.901-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44822 #65 (5 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:09.901-0500 I NETWORK [conn65] received client metadata from 127.0.0.1:44822 conn65: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.902-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:09.902-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44824 #66 (6 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:09.903-0500 I NETWORK [conn66] received client metadata from 127.0.0.1:44824 conn66: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.903-0500 Implicit session: session { "id" : UUID("5f192990-5cc4-492f-9e9f-121e02d9f1a2") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.904-0500 Implicit session: session { "id" : UUID("3fcf86d5-a4a1-4656-beb0-4e888b0faa18") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.904-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.906-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.908-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38968 #98 (35 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.910-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46436 #117 (43 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.910-0500 I NETWORK [conn98] received client metadata from 127.0.0.1:38968 conn98: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.910-0500 I NETWORK [conn117] received client metadata from 127.0.0.1:46436 conn117: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.912-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38972 #99 (36 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.912-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46440 #118 (44 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.913-0500 I NETWORK [conn99] received client metadata from 127.0.0.1:38972 conn99: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.913-0500 I NETWORK [conn118] received client metadata from 127.0.0.1:46440 conn118: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.913-0500 I INDEX [conn37] index build: starting on test1_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.913-0500 I INDEX [conn37] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.913-0500 I STORAGE [conn37] Index build initialized: 82c6d58f-4430-4dc5-9dc6-68f9143a8303: test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.913-0500 I INDEX [conn37] Waiting for index build to complete: 82c6d58f-4430-4dc5-9dc6-68f9143a8303
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.913-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52010 #49 (13 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.913-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.913-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51642 #50 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.913-0500 I NETWORK [conn49] received client metadata from 127.0.0.1:52010 conn49: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.913-0500 I NETWORK [conn50] received client metadata from 127.0.0.1:51642 conn50: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.913-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.914-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35004 #47 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.914-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52904 #49 (13 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.914-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.914-0500 I NETWORK [conn47] received client metadata from 127.0.0.1:35004 conn47: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.914-0500 I NETWORK [conn49] received client metadata from 127.0.0.1:52904 conn49: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.915-0500 I STORAGE [ReplWriterWorker-5] createCollection: test1_fsmdb0.fsmcoll0 with provided UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38 and options: { uuid: UUID("dccb4b9f-92a4-4a8c-933f-ac40a7941a38") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.915-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.915-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.915-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.915-0500 [jsTest] New session started with sessionID: { "id" : UUID("7bb26b01-5ba2-44cf-b64a-20a0db1139d5") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.915-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.915-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.915-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.915-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.915-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.915-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.915-0500 [jsTest] New session started with sessionID: { "id" : UUID("60968bd5-e168-42a2-838b-645bc0d1c7c1") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.915-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.915-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.915-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500 [jsTest] New session started with sessionID: { "id" : UUID("d76404e1-84fc-45be-a1c3-4de7a3809694") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500 [jsTest] New session started with sessionID: { "id" : UUID("1a24e8a7-85ed-48c8-821c-91f710c43afe") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.916-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500 [jsTest] New session started with sessionID: { "id" : UUID("478885b1-b21b-45d5-b296-b6238aebcc81") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.916-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.917-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.917-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.917-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.917-0500 [jsTest] New session started with sessionID: { "id" : UUID("2c28c3b0-d5c4-4eb5-b6f5-fca688bd494a") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.917-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.917-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.917-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.917-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 82c6d58f-4430-4dc5-9dc6-68f9143a8303: test1_fsmdb0.fsmcoll0 ( dccb4b9f-92a4-4a8c-933f-ac40a7941a38 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.917-0500 I INDEX [conn37] Index build completed: 82c6d58f-4430-4dc5-9dc6-68f9143a8303
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.918-0500 Running data consistency checks for replica set: shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.919-0500 Running data consistency checks for replica set: shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.923-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.923-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.923-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.923-0500 [jsTest] New session started with sessionID: { "id" : UUID("b6afd043-0fd4-4ecb-b4b7-af89bea73940") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.923-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.923-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.923-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.923-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.923-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.923-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.923-0500 [jsTest] New session started with sessionID: { "id" : UUID("7e69d87d-2ee5-4ff6-9e34-f35c7003f5a3") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.923-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.923-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.923-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.923-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.923-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500 [jsTest] New session started with sessionID: { "id" : UUID("3c4d76fe-38b7-4f92-8231-e7c5d763de84") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500 [jsTest] New session started with sessionID: { "id" : UUID("964f0874-0887-45d9-b360-b491d3de2587") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500 [jsTest] New session started with sessionID: { "id" : UUID("4affe478-4e6e-4c33-8540-5c65781c64dc") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500 [jsTest] New session started with sessionID: { "id" : UUID("0a55adc3-8572-4272-a594-297a978ed81b") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.924-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.925-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.924-0500 W CONTROL [conn50] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 40 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.924-0500 W CONTROL [conn99] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.924-0500 W CONTROL [conn118] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 47 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.925-0500 W CONTROL [conn49] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.925-0500 W CONTROL [conn47] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 43 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.925-0500 W CONTROL [conn49] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.928-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.928-0500 W CONTROL [conn118] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 47 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.929-0500 W CONTROL [conn50] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 40 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.929-0500 W CONTROL [conn47] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 43 }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:09.931-0500 I NETWORK [conn66] end connection 127.0.0.1:44824 (5 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.931-0500 I NETWORK [conn117] end connection 127.0.0.1:46436 (43 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.931-0500 I INDEX [ReplWriterWorker-5] index build: starting on test1_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.931-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.931-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 66cacd56-6302-4922-a3f3-02a46edad03b: test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.931-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.931-0500 I NETWORK [conn118] end connection 127.0.0.1:46440 (42 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.931-0500 I NETWORK [conn50] end connection 127.0.0.1:51642 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.931-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.931-0500 I NETWORK [conn47] end connection 127.0.0.1:35004 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.933-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.935-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 66cacd56-6302-4922-a3f3-02a46edad03b: test1_fsmdb0.fsmcoll0 ( dccb4b9f-92a4-4a8c-933f-ac40a7941a38 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.936-0500 I SHARDING [conn37] CMD: shardcollection: { _shardsvrShardCollection: "test1_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("802be2a3-9ddb-4c1d-9b87-98584e471fdb"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796669, 11), signature: { hash: BinData(0, 3119F58DE2009FC81F1185C9E8BC8365133E2132), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44794", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796669, 11), t: 1 } }, $db: "admin" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.936-0500 I SHARDING [conn37] about to log metadata event into changelog: { _id: "nz_desktop:20001-2019-11-26T14:31:09.936-0500-5ddd7d7d3bbfe7fa5630d6e6", server: "nz_desktop:20001", shard: "shard-rs0", clientAddr: "127.0.0.1:38444", time: new Date(1574796669936), what: "shardCollection.start", ns: "test1_fsmdb0.fsmcoll0", details: { shardKey: { _id: "hashed" }, collection: "test1_fsmdb0.fsmcoll0", uuid: UUID("dccb4b9f-92a4-4a8c-933f-ac40a7941a38"), empty: true, fromMapReduce: false, primary: "shard-rs0:shard-rs0/localhost:20001,localhost:20002,localhost:20003", numChunks: 4 } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.937-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.938-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46450 #119 (43 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.938-0500 I NETWORK [conn119] received client metadata from 127.0.0.1:46450 conn119: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.939-0500 I STORAGE [conn119] createCollection: test1_fsmdb0.fsmcoll0 with provided UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38 and options: { uuid: UUID("dccb4b9f-92a4-4a8c-933f-ac40a7941a38") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.946-0500 I INDEX [ReplWriterWorker-14] index build: starting on test1_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.946-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.946-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: ab7280a4-9f06-48ab-97a8-6a7dfc651fc0: test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:09.976-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js finished.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.947-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38986 #101 (37 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.947-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52912 #50 (14 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.953-0500 I INDEX [conn119] index build: done building index _id_ on ns test1_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:10.055-0500 Using 5 threads (requested 5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:11.044-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash_background.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash_background"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash_background.js
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.044-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.044-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.044-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.044-0500 Implicit session: session { "id" : UUID("39f6d868-7c12-4416-9520-e4d5ce95b092") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.044-0500 Implicit session: session { "id" : UUID("434e5532-88aa-490c-b47d-417a0bbe2f80") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.044-0500 Implicit session: session { "id" : UUID("02a80bf1-6db2-4f2e-a32c-4dfe3a26542c") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.044-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.044-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.044-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 Implicit session: session { "id" : UUID("1fd24bef-1faf-430e-a225-c01a332a688e") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 Implicit session: session { "id" : UUID("d360ada2-9887-4b39-a39e-02bf8af7725a") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 [tid:2] setting random seed: 2173768868
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 [tid:0] setting random seed: 1315648332
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 [tid:4] setting random seed: 2244939943
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 [tid:1] setting random seed: 542021918
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 [tid:3] setting random seed: 2359880270
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 [tid:2]
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 [jsTest] New session started with sessionID: { "id" : UUID("275b014e-73f2-4e18-a6f0-826d2de7f856") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 [tid:0]
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 [jsTest] New session started with sessionID: { "id" : UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.045-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.046-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.046-0500 [tid:1]
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.046-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.955-0500 I STORAGE [ReplWriterWorker-10] createCollection: test1_fsmdb0.fsmcoll0 with provided UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38 and options: { uuid: UUID("dccb4b9f-92a4-4a8c-933f-ac40a7941a38") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.046-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.955-0500 I STORAGE [ReplWriterWorker-4] createCollection: test1_fsmdb0.fsmcoll0 with provided UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38 and options: { uuid: UUID("dccb4b9f-92a4-4a8c-933f-ac40a7941a38") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.050-0500 [jsTest] New session started with sessionID: { "id" : UUID("e3e6d81f-56e4-4925-8103-eee7edf55063") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:09.966-0500 I NETWORK [conn65] end connection 127.0.0.1:44822 (4 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.050-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.970-0500 I NETWORK [conn87] end connection 127.0.0.1:56150 (38 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.050-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.050-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:10.317-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58004 #32 (2 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.051-0500 [tid:3]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.946-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.051-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.948-0500 I NETWORK [conn101] received client metadata from 127.0.0.1:38986 conn101: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.051-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.948-0500 I NETWORK [conn50] received client metadata from 127.0.0.1:52912 conn50: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.051-0500 [jsTest] New session started with sessionID: { "id" : UUID("24e7029f-69f1-4b83-9971-584e1ea130ee") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.051-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.051-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.051-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.052-0500 [tid:4]
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.052-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.052-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.052-0500 [jsTest] New session started with sessionID: { "id" : UUID("476da9c6-a903-4290-8632-5349ffeb7563") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.961-0500 I INDEX [conn119] index build: done building index _id_hashed on ns test1_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.052-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.970-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test1_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.052-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:11.052-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.970-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:09.969-0500 I NETWORK [conn62] end connection 127.0.0.1:44744 (3 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:11.053-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js started with pid 15379.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.976-0500 I NETWORK [conn86] end connection 127.0.0.1:56148 (37 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:10.317-0500 I NETWORK [conn32] received client metadata from 127.0.0.1:58004 conn32: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.947-0500 I NETWORK [conn49] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.948-0500 I NETWORK [listener] connection accepted from 127.0.0.1:38992 #102 (38 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.965-0500 W CONTROL [conn49] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 4 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.961-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 from version {} to version { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.976-0500 I NETWORK [conn47] end connection 127.0.0.1:51572 (10 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.976-0500 I NETWORK [conn44] end connection 127.0.0.1:34930 (10 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.041-0500 I COMMAND [conn64] command test1_fsmdb0.fsmcoll0 appName: "MongoDB Shell" command: shardCollection { shardCollection: "test1_fsmdb0.fsmcoll0", key: { _id: "hashed" }, lsid: { id: UUID("802be2a3-9ddb-4c1d-9b87-98584e471fdb") }, $clusterTime: { clusterTime: Timestamp(1574796669, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:245 protocol:op_msg 158ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:09.989-0500 D4 TXN [conn31] New transaction started with txnNumber: 0 on session with lsid 92b77a98-5848-4954-907d-ff6607ceff71
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:10.339-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58010 #33 (3 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.947-0500 I NETWORK [conn49] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.949-0500 I NETWORK [conn102] received client metadata from 127.0.0.1:38992 conn102: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.967-0500 I NETWORK [conn49] end connection 127.0.0.1:52904 (13 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.962-0500 I SHARDING [conn119] Marking collection test1_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.986-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.986-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.135-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 from version {} to version { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.038-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 from version {} to version { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:10.339-0500 I NETWORK [conn33] received client metadata from 127.0.0.1:58010 conn33: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.947-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:09.976-0500 I NETWORK [conn46] end connection 127.0.0.1:52818 (12 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.970-0500 I NETWORK [conn110] end connection 127.0.0.1:46376 (42 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.987-0500 I INDEX [ReplWriterWorker-9] index build: starting on test1_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.986-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 937aebf7-b134-4ece-b004-a2131d57c863: test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38 ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.136-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.fsmcoll0 to version 1|3||5ddd7d7d3bbfe7fa5630d6e7 took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.038-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.fsmcoll0 to version 1|3||5ddd7d7d3bbfe7fa5630d6e7 took 0 ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:10.350-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 from version {} to version { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.964-0500 W CONTROL [conn99] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 4 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.012-0500 I STORAGE [ReplWriterWorker-4] createCollection: config.cache.chunks.test1_fsmdb0.fsmcoll0 with provided UUID: 24d02c72-11d8-48c7-b13e-109658af75b4 and options: { uuid: UUID("24d02c72-11d8-48c7-b13e-109658af75b4") }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.976-0500 I NETWORK [conn109] end connection 127.0.0.1:46370 (41 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.987-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.986-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.227-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.agg_out took 0 ms and found the collection is not sharded
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.040-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d7d5cde74b6784bb3fe unlocked.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.041-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d7d5cde74b6784bb3fc unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.966-0500 I NETWORK [conn98] end connection 127.0.0.1:38968 (37 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.029-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.997-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.fsmcoll0 to version 1|3||5ddd7d7d3bbfe7fa5630d6e7 took 1 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.987-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: e345632f-c1ad-4896-9619-b6aae2573871: test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.987-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.306-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44852 #67 (4 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:10.352-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.fsmcoll0 to version 1|3||5ddd7d7d3bbfe7fa5630d6e7 took 1 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.041-0500 I COMMAND [conn19] command admin.$cmd appName: "MongoDB Shell" command: _configsvrShardCollection { _configsvrShardCollection: "test1_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("802be2a3-9ddb-4c1d-9b87-98584e471fdb"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1574796669, 9), signature: { hash: BinData(0, 3119F58DE2009FC81F1185C9E8BC8365133E2132), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44794", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796669, 9), t: 1 } }, $db: "admin" } numYields:0 reslen:587 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 6 } }, Global: { acquireCount: { r: 2, w: 4 } }, Database: { acquireCount: { r: 2, w: 4 } }, Collection: { acquireCount: { r: 2, w: 4 } }, Mutex: { acquireCount: { r: 10, W: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 158ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.967-0500 I NETWORK [conn99] end connection 127.0.0.1:38972 (36 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.947-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52022 #50 (14 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.948-0500 I NETWORK [conn50] received client metadata from 127.0.0.1:52022 conn50: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.997-0500 I SHARDING [conn59] Updating metadata for collection test1_fsmdb0.fsmcoll0 from collection version: to collection version: 1|3||5ddd7d7d3bbfe7fa5630d6e7, shard version: 1|3||5ddd7d7d3bbfe7fa5630d6e7 due to version change
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.987-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.989-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.306-0500 I NETWORK [conn67] received client metadata from 127.0.0.1:44852 conn67: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:10.621-0500 I COMMAND [conn33] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063") }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 271ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.044-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d7e5cde74b6784bb41e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.970-0500 I NETWORK [conn91] end connection 127.0.0.1:38900 (35 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.048-0500 I INDEX [ReplWriterWorker-0] index build: starting on config.cache.chunks.test1_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.948-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:09.997-0500 I STORAGE [ShardServerCatalogCacheLoader-1] createCollection: config.cache.chunks.test1_fsmdb0.fsmcoll0 with provided UUID: 06773b9f-88ae-4430-b4bd-32b9c52979b6 and options: { uuid: UUID("06773b9f-88ae-4430-b4bd-32b9c52979b6") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.987-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:09.990-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 937aebf7-b134-4ece-b004-a2131d57c863: test1_fsmdb0.fsmcoll0 ( dccb4b9f-92a4-4a8c-933f-ac40a7941a38 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.306-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44854 #68 (5 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:10.654-0500 I COMMAND [conn32] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee") }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 304ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.044-0500 I SHARDING [conn19] Enabling sharding for database [test1_fsmdb0] in config db
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.976-0500 I NETWORK [conn90] end connection 127.0.0.1:38894 (34 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.048-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.948-0500 I SHARDING [Sharding-Fixed-2] Updating config server with confirmed set shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:10.011-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: done building index _id_ on ns config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.990-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:10.013-0500 I STORAGE [ReplWriterWorker-15] createCollection: config.cache.chunks.test1_fsmdb0.fsmcoll0 with provided UUID: 06773b9f-88ae-4430-b4bd-32b9c52979b6 and options: { uuid: UUID("06773b9f-88ae-4430-b4bd-32b9c52979b6") }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.306-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44856 #69 (6 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:11.048-0500 I COMMAND [conn33] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063") }, $clusterTime: { clusterTime: Timestamp(1574796670, 3066), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440\", to: \"test1_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:714 protocol:op_msg 424ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.045-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d7e5cde74b6784bb41e unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.995-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.fsmcoll0 to version 1|3||5ddd7d7d3bbfe7fa5630d6e7 took 1 ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.048-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: c9da2212-f278-4cde-bbae-53b6ef5600b4: config.cache.chunks.test1_fsmdb0.fsmcoll0 (24d02c72-11d8-48c7-b13e-109658af75b4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.949-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:10.011-0500 I INDEX [ShardServerCatalogCacheLoader-1] Registering index build: 25d006d2-4724-42c0-8a15-ec115f374bb3
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:09.992-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: e345632f-c1ad-4896-9619-b6aae2573871: test1_fsmdb0.fsmcoll0 ( dccb4b9f-92a4-4a8c-933f-ac40a7941a38 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:10.029-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.306-0500 I NETWORK [conn68] received client metadata from 127.0.0.1:44854 conn68: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.048-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d7e5cde74b6784bb424
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.995-0500 I SHARDING [conn37] Marking collection test1_fsmdb0.fsmcoll0 as collection version: 1|3||5ddd7d7d3bbfe7fa5630d6e7, shard version: 1|1||5ddd7d7d3bbfe7fa5630d6e7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.048-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.951-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ab7280a4-9f06-48ab-97a8-6a7dfc651fc0: test1_fsmdb0.fsmcoll0 ( dccb4b9f-92a4-4a8c-933f-ac40a7941a38 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:10.027-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: starting on config.cache.chunks.test1_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:10.013-0500 I STORAGE [ReplWriterWorker-1] createCollection: config.cache.chunks.test1_fsmdb0.fsmcoll0 with provided UUID: 06773b9f-88ae-4430-b4bd-32b9c52979b6 and options: { uuid: UUID("06773b9f-88ae-4430-b4bd-32b9c52979b6") }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:10.050-0500 I INDEX [ReplWriterWorker-10] index build: starting on config.cache.chunks.test1_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.307-0500 I NETWORK [conn69] received client metadata from 127.0.0.1:44856 conn69: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.049-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0.fsmcoll0' acquired for 'shardCollection', ts : 5ddd7d7e5cde74b6784bb426
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:09.995-0500 I STORAGE [ShardServerCatalogCacheLoader-0] createCollection: config.cache.chunks.test1_fsmdb0.fsmcoll0 with provided UUID: 24d02c72-11d8-48c7-b13e-109658af75b4 and options: { uuid: UUID("24d02c72-11d8-48c7-b13e-109658af75b4") }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.049-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.964-0500 W CONTROL [conn49] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 8 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:10.027-0500 I INDEX [ShardServerCatalogCacheLoader-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:10.029-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:10.050-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.315-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44858 #70 (7 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.051-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 from version {} to version { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.010-0500 I INDEX [ShardServerCatalogCacheLoader-0] index build: done building index _id_ on ns config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.050-0500 I SHARDING [ReplWriterWorker-9] Marking collection config.cache.chunks.test1_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.967-0500 I NETWORK [conn49] end connection 127.0.0.1:52010 (13 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:10.027-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Index build initialized: 25d006d2-4724-42c0-8a15-ec115f374bb3: config.cache.chunks.test1_fsmdb0.fsmcoll0 (06773b9f-88ae-4430-b4bd-32b9c52979b6 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:10.050-0500 I INDEX [ReplWriterWorker-4] index build: starting on config.cache.chunks.test1_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:10.050-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 684da9e4-2b16-4c4d-8d72-a25006e09073: config.cache.chunks.test1_fsmdb0.fsmcoll0 (06773b9f-88ae-4430-b4bd-32b9c52979b6 ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.315-0500 I NETWORK [conn70] received client metadata from 127.0.0.1:44858 conn70: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.052-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.fsmcoll0 to version 1|3||5ddd7d7d3bbfe7fa5630d6e7 took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.010-0500 I INDEX [ShardServerCatalogCacheLoader-0] Registering index build: 9a83ffd2-7922-4fa0-be32-b2c3d615578a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.052-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:09.976-0500 I NETWORK [conn46] end connection 127.0.0.1:51926 (12 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:10.027-0500 I INDEX [ShardServerCatalogCacheLoader-1] Waiting for index build to complete: 25d006d2-4724-42c0-8a15-ec115f374bb3
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:10.050-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:10.050-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.316-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44860 #71 (8 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.053-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d7e5cde74b6784bb426' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.025-0500 I INDEX [ShardServerCatalogCacheLoader-0] index build: starting on config.cache.chunks.test1_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.052-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.030-0500 I STORAGE [ReplWriterWorker-9] createCollection: config.cache.chunks.test1_fsmdb0.fsmcoll0 with provided UUID: 24d02c72-11d8-48c7-b13e-109658af75b4 and options: { uuid: UUID("24d02c72-11d8-48c7-b13e-109658af75b4") }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:10.027-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:10.050-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 14b6c741-2a24-4823-b785-f6e7718d341d: config.cache.chunks.test1_fsmdb0.fsmcoll0 (06773b9f-88ae-4430-b4bd-32b9c52979b6 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:10.051-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.316-0500 I NETWORK [conn71] received client metadata from 127.0.0.1:44860 conn71: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.054-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d7e5cde74b6784bb424' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.025-0500 I INDEX [ShardServerCatalogCacheLoader-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.053-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c9da2212-f278-4cde-bbae-53b6ef5600b4: config.cache.chunks.test1_fsmdb0.fsmcoll0 ( 24d02c72-11d8-48c7-b13e-109658af75b4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.046-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:10.028-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:10.050-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:10.052-0500 I SHARDING [ReplWriterWorker-11] Marking collection config.cache.chunks.test1_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.318-0500 I NETWORK [conn67] end connection 127.0.0.1:44852 (7 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.196-0500 I SHARDING [conn31] distributed lock 'test1_fsmdb0' acquired for 'createCollection', ts : 5ddd7d7e5cde74b6784bb435
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.025-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Index build initialized: 9a83ffd2-7922-4fa0-be32-b2c3d615578a: config.cache.chunks.test1_fsmdb0.fsmcoll0 (24d02c72-11d8-48c7-b13e-109658af75b4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.210-0500 I STORAGE [ReplWriterWorker-4] createCollection: test1_fsmdb0.agg_out with provided UUID: f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4 and options: { uuid: UUID("f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.062-0500 I INDEX [ReplWriterWorker-15] index build: starting on config.cache.chunks.test1_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:10.032-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:10.050-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:10.054-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.318-0500 I NETWORK [conn68] end connection 127.0.0.1:44854 (6 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.197-0500 I SHARDING [conn31] distributed lock 'test1_fsmdb0.agg_out' acquired for 'createCollection', ts : 5ddd7d7e5cde74b6784bb437
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.025-0500 I INDEX [ShardServerCatalogCacheLoader-0] Waiting for index build to complete: 9a83ffd2-7922-4fa0-be32-b2c3d615578a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.220-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.062-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:10.034-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 25d006d2-4724-42c0-8a15-ec115f374bb3: config.cache.chunks.test1_fsmdb0.fsmcoll0 ( 06773b9f-88ae-4430-b4bd-32b9c52979b6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:10.051-0500 I SHARDING [ReplWriterWorker-8] Marking collection config.cache.chunks.test1_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:10.054-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.320-0500 I NETWORK [conn69] end connection 127.0.0.1:44856 (5 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.224-0500 I SHARDING [conn31] distributed lock with ts: 5ddd7d7e5cde74b6784bb437' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.025-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.256-0500 I INDEX [ReplWriterWorker-6] index build: starting on test1_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.062-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: fc4fde0d-4663-404e-989f-1fe89f607936: config.cache.chunks.test1_fsmdb0.fsmcoll0 (24d02c72-11d8-48c7-b13e-109658af75b4 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:10.034-0500 I INDEX [ShardServerCatalogCacheLoader-1] Index build completed: 25d006d2-4724-42c0-8a15-ec115f374bb3
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:10.053-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:10.055-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 684da9e4-2b16-4c4d-8d72-a25006e09073: config.cache.chunks.test1_fsmdb0.fsmcoll0 ( 06773b9f-88ae-4430-b4bd-32b9c52979b6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.330-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44864 #72 (6 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:10.225-0500 I SHARDING [conn31] distributed lock with ts: 5ddd7d7e5cde74b6784bb435' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.025-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.256-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.063-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:10.034-0500 I SHARDING [ShardServerCatalogCacheLoader-1] Marking collection config.cache.chunks.test1_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:10.054-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.330-0500 I NETWORK [conn72] received client metadata from 127.0.0.1:44864 conn72: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.028-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.256-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 5048c913-02e2-4ef4-8223-339622f3295a: test1_fsmdb0.agg_out (f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.063-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:10.473-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46492 #120 (42 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:10.055-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 14b6c741-2a24-4823-b785-f6e7718d341d: config.cache.chunks.test1_fsmdb0.fsmcoll0 ( 06773b9f-88ae-4430-b4bd-32b9c52979b6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.330-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44866 #73 (7 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.031-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 9a83ffd2-7922-4fa0-be32-b2c3d615578a: config.cache.chunks.test1_fsmdb0.fsmcoll0 ( 24d02c72-11d8-48c7-b13e-109658af75b4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.256-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.064-0500 I SHARDING [ReplWriterWorker-11] Marking collection config.cache.chunks.test1_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:10.474-0500 I NETWORK [conn120] received client metadata from 127.0.0.1:46492 conn120: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.330-0500 I NETWORK [conn73] received client metadata from 127.0.0.1:44866 conn73: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.031-0500 I INDEX [ShardServerCatalogCacheLoader-0] Index build completed: 9a83ffd2-7922-4fa0-be32-b2c3d615578a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.257-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.066-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.339-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44870 #74 (8 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.031-0500 I SHARDING [ShardServerCatalogCacheLoader-0] Marking collection config.cache.chunks.test1_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.259-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.066-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.340-0500 I NETWORK [conn74] received client metadata from 127.0.0.1:44870 conn74: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.036-0500 I SHARDING [conn37] Created 4 chunk(s) for: test1_fsmdb0.fsmcoll0, producing collection version 1|3||5ddd7d7d3bbfe7fa5630d6e7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.261-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 5048c913-02e2-4ef4-8223-339622f3295a: test1_fsmdb0.agg_out ( f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.068-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: fc4fde0d-4663-404e-989f-1fe89f607936: config.cache.chunks.test1_fsmdb0.fsmcoll0 ( 24d02c72-11d8-48c7-b13e-109658af75b4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.342-0500 I NETWORK [conn72] end connection 127.0.0.1:44864 (7 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.036-0500 I SHARDING [conn37] about to log metadata event into changelog: { _id: "nz_desktop:20001-2019-11-26T14:31:10.036-0500-5ddd7d7e3bbfe7fa5630d725", server: "nz_desktop:20001", shard: "shard-rs0", clientAddr: "127.0.0.1:38444", time: new Date(1574796670036), what: "shardCollection.end", ns: "test1_fsmdb0.fsmcoll0", details: { version: "1|3||5ddd7d7d3bbfe7fa5630d6e7" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.383-0500 I STORAGE [ReplWriterWorker-15] createCollection: test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 with provided UUID: 30b7e609-caec-491b-8860-d6828489d28f and options: { uuid: UUID("30b7e609-caec-491b-8860-d6828489d28f"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.221-0500 I STORAGE [ReplWriterWorker-15] createCollection: test1_fsmdb0.agg_out with provided UUID: f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4 and options: { uuid: UUID("f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4") }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.342-0500 I NETWORK [conn73] end connection 127.0.0.1:44866 (6 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.037-0500 I COMMAND [conn37] command admin.$cmd appName: "MongoDB Shell" command: _shardsvrShardCollection { _shardsvrShardCollection: "test1_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("802be2a3-9ddb-4c1d-9b87-98584e471fdb"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796669, 11), signature: { hash: BinData(0, 3119F58DE2009FC81F1185C9E8BC8365133E2132), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44794", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796669, 11), t: 1 } }, $db: "admin" } numYields:0 reslen:415 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 9 } }, ReplicationStateTransition: { acquireCount: { w: 15 } }, Global: { acquireCount: { r: 8, w: 7 } }, Database: { acquireCount: { r: 8, w: 7, W: 1 } }, Collection: { acquireCount: { r: 8, w: 3, W: 4 } }, Mutex: { acquireCount: { r: 16, W: 4 } } } flowControl:{ acquireCount: 5 } storage:{} protocol:op_msg 151ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.395-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.234-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.517-0500 I COMMAND [conn74] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856") }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 167ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.197-0500 I STORAGE [conn37] createCollection: test1_fsmdb0.agg_out with generated UUID: f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4 and options: {}
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.396-0500 I STORAGE [ReplWriterWorker-5] createCollection: test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 with provided UUID: 247a5030-b416-45d9-b2c7-e0f93a48ca5c and options: { uuid: UUID("247a5030-b416-45d9-b2c7-e0f93a48ca5c"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.267-0500 I INDEX [ReplWriterWorker-5] index build: starting on test1_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.522-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d") }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 172ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.208-0500 I INDEX [conn37] index build: done building index _id_ on ns test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.410-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.267-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.584-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563") }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 234ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.227-0500 I INDEX [conn65] Registering index build: 2d9610bf-50f3-487a-ae6e-480405b12732
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.411-0500 I STORAGE [ReplWriterWorker-13] createCollection: test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 with provided UUID: bd024227-4d0c-4d17-a5bc-33092f16f4b5 and options: { uuid: UUID("bd024227-4d0c-4d17-a5bc-33092f16f4b5"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.267-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 64abac1e-538e-4662-8582-fe15177b4262: test1_fsmdb0.agg_out (f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4 ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:10.709-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d") }, $clusterTime: { clusterTime: Timestamp(1574796670, 2202), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 182ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.239-0500 I INDEX [conn65] index build: starting on test1_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.427-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.267-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:11.043-0500 I COMMAND [conn74] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856") }, $clusterTime: { clusterTime: Timestamp(1574796670, 1816), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313\", to: \"test1_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:714 protocol:op_msg 522ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.239-0500 I INDEX [conn65] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.428-0500 I STORAGE [ReplWriterWorker-7] createCollection: test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 with provided UUID: 34493476-06d5-494e-8201-f3daa0838c9d and options: { uuid: UUID("34493476-06d5-494e-8201-f3daa0838c9d"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.268-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:11.047-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563") }, $clusterTime: { clusterTime: Timestamp(1574796670, 2948), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad\", to: \"test1_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:714 protocol:op_msg 460ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.239-0500 I STORAGE [conn65] Index build initialized: 2d9610bf-50f3-487a-ae6e-480405b12732: test1_fsmdb0.agg_out (f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.443-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.270-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.239-0500 I INDEX [conn65] Waiting for index build to complete: 2d9610bf-50f3-487a-ae6e-480405b12732
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.444-0500 I STORAGE [ReplWriterWorker-4] createCollection: test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db with provided UUID: ae2d7e04-d26b-40a8-b331-95f6b2cc3d27 and options: { uuid: UUID("ae2d7e04-d26b-40a8-b331-95f6b2cc3d27"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.270-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 64abac1e-538e-4662-8582-fe15177b4262: test1_fsmdb0.agg_out ( f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.239-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.458-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.396-0500 I STORAGE [ReplWriterWorker-8] createCollection: test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 with provided UUID: 30b7e609-caec-491b-8860-d6828489d28f and options: { uuid: UUID("30b7e609-caec-491b-8860-d6828489d28f"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.240-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.476-0500 I INDEX [ReplWriterWorker-9] index build: starting on test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.412-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.242-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.476-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.413-0500 I STORAGE [ReplWriterWorker-0] createCollection: test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 with provided UUID: 247a5030-b416-45d9-b2c7-e0f93a48ca5c and options: { uuid: UUID("247a5030-b416-45d9-b2c7-e0f93a48ca5c"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.244-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 2d9610bf-50f3-487a-ae6e-480405b12732: test1_fsmdb0.agg_out ( f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.476-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 38a9d5e4-a8ed-4615-ae37-41fe47fa9f40: test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 (30b7e609-caec-491b-8860-d6828489d28f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.429-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.244-0500 I INDEX [conn65] Index build completed: 2d9610bf-50f3-487a-ae6e-480405b12732
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.476-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.430-0500 I STORAGE [ReplWriterWorker-14] createCollection: test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 with provided UUID: bd024227-4d0c-4d17-a5bc-33092f16f4b5 and options: { uuid: UUID("bd024227-4d0c-4d17-a5bc-33092f16f4b5"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.351-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.477-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.445-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.351-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39014 #103 (35 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.479-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.445-0500 I STORAGE [ReplWriterWorker-11] createCollection: test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 with provided UUID: 34493476-06d5-494e-8201-f3daa0838c9d and options: { uuid: UUID("34493476-06d5-494e-8201-f3daa0838c9d"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.351-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39016 #106 (36 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.489-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 38a9d5e4-a8ed-4615-ae37-41fe47fa9f40: test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 ( 30b7e609-caec-491b-8860-d6828489d28f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.460-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.351-0500 I NETWORK [conn103] received client metadata from 127.0.0.1:39014 conn103: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.496-0500 I INDEX [ReplWriterWorker-5] index build: starting on test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.460-0500 I STORAGE [ReplWriterWorker-15] createCollection: test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db with provided UUID: ae2d7e04-d26b-40a8-b331-95f6b2cc3d27 and options: { uuid: UUID("ae2d7e04-d26b-40a8-b331-95f6b2cc3d27"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.352-0500 I NETWORK [conn106] received client metadata from 127.0.0.1:39016 conn106: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.496-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.476-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.352-0500 I STORAGE [conn46] createCollection: test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 with generated UUID: 30b7e609-caec-491b-8860-d6828489d28f and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.496-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 8aeedae2-db2b-438d-862c-7c0a57305f90: test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 (247a5030-b416-45d9-b2c7-e0f93a48ca5c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.493-0500 I INDEX [ReplWriterWorker-4] index build: starting on test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.352-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39018 #108 (37 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.497-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.493-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.352-0500 I NETWORK [conn108] received client metadata from 127.0.0.1:39018 conn108: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.497-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.493-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 02920b4e-2d6c-4932-8c4c-074afae82d9f: test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 (30b7e609-caec-491b-8860-d6828489d28f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.353-0500 I STORAGE [conn108] createCollection: test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 with generated UUID: 247a5030-b416-45d9-b2c7-e0f93a48ca5c and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.499-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.493-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.353-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39020 #110 (38 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.501-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 8aeedae2-db2b-438d-862c-7c0a57305f90: test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 ( 247a5030-b416-45d9-b2c7-e0f93a48ca5c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.494-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.353-0500 I NETWORK [conn110] received client metadata from 127.0.0.1:39020 conn110: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.525-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.496-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.354-0500 I STORAGE [conn110] createCollection: test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 with generated UUID: bd024227-4d0c-4d17-a5bc-33092f16f4b5 and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.525-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.498-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 02920b4e-2d6c-4932-8c4c-074afae82d9f: test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 ( 30b7e609-caec-491b-8860-d6828489d28f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.354-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39022 #112 (39 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.525-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: b0c4f33b-6f26-4b06-81e9-34932c4e3b14: test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 (bd024227-4d0c-4d17-a5bc-33092f16f4b5 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.513-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.354-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39024 #114 (40 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.525-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.513-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.354-0500 I NETWORK [conn112] received client metadata from 127.0.0.1:39022 conn112: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.525-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.513-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 492a9f2a-3218-4129-8388-78b9c5224e9b: test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 (247a5030-b416-45d9-b2c7-e0f93a48ca5c ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.354-0500 I NETWORK [conn114] received client metadata from 127.0.0.1:39024 conn114: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.528-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.513-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.355-0500 I STORAGE [conn112] createCollection: test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 with generated UUID: 34493476-06d5-494e-8201-f3daa0838c9d and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.538-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: b0c4f33b-6f26-4b06-81e9-34932c4e3b14: test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 ( bd024227-4d0c-4d17-a5bc-33092f16f4b5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.513-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.355-0500 I STORAGE [conn114] createCollection: test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db with generated UUID: ae2d7e04-d26b-40a8-b331-95f6b2cc3d27 and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.574-0500 I INDEX [ReplWriterWorker-11] index build: starting on test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.515-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.381-0500 I INDEX [conn46] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.574-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.517-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 492a9f2a-3218-4129-8388-78b9c5224e9b: test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 ( 247a5030-b416-45d9-b2c7-e0f93a48ca5c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.382-0500 I INDEX [conn46] Registering index build: 635372fb-507f-416b-a289-dd20f3e247ea
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.574-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: ed4e009c-fcf3-4cd0-ac06-9e6cc011ae05: test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 (34493476-06d5-494e-8201-f3daa0838c9d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.541-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.387-0500 I INDEX [conn108] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.574-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.541-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.387-0500 I INDEX [conn108] Registering index build: 5b0b55c7-33fc-4d1a-94d9-b7b5b1ba5fa5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.574-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.541-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: fa00b384-4e01-4753-8768-55923180e0c5: test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 (bd024227-4d0c-4d17-a5bc-33092f16f4b5 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.392-0500 I INDEX [conn110] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.577-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.542-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.392-0500 I INDEX [conn110] Registering index build: 694cada4-0cd7-414a-af45-f1a505808390
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.581-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ed4e009c-fcf3-4cd0-ac06-9e6cc011ae05: test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 ( 34493476-06d5-494e-8201-f3daa0838c9d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.542-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.398-0500 I INDEX [conn112] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.595-0500 I INDEX [ReplWriterWorker-1] index build: starting on test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:11.075-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.573-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.396-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.398-0500 I INDEX [conn112] Registering index build: 1f321288-4246-41ef-b128-6af4a230adf2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.396-0500 Implicit session: session { "id" : UUID("6d88a5c3-fd74-4a34-907f-8899667f5939") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500 true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500 2019-11-26T14:31:11.136-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500 2019-11-26T14:31:11.137-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500 2019-11-26T14:31:11.137-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500 [jsTest] New session started with sessionID: { "id" : UUID("d7741ff0-3063-4e3d-8d3e-6f716f224ad9") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500 2019-11-26T14:31:11.140-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500 2019-11-26T14:31:11.141-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500 2019-11-26T14:31:11.141-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500 2019-11-26T14:31:11.141-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500 2019-11-26T14:31:11.142-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.397-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500 [jsTest] New session started with sessionID: { "id" : UUID("bf4c8bdc-193b-4d73-9288-3185c6183ff9") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500 2019-11-26T14:31:11.143-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500 2019-11-26T14:31:11.143-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500 2019-11-26T14:31:11.143-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500 2019-11-26T14:31:11.143-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500 2019-11-26T14:31:11.144-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500 [jsTest] New session started with sessionID: { "id" : UUID("f51df85e-4e36-4aef-b469-8e1975f208af") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500 Implicit session: session { "id" : UUID("0e2a5d6d-e6ac-48e0-9cd7-4d5d48855a7f") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.398-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.399-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.399-0500 Implicit session: session { "id" : UUID("89e405b8-5992-4572-80e1-b8f88015f176") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.399-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:11.126-0500 I COMMAND [conn32] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee") }, $clusterTime: { clusterTime: Timestamp(1574796670, 3071), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0\", to: \"test1_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:716 protocol:op_msg 469ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.399-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:11.126-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d") }, $clusterTime: { clusterTime: Timestamp(1574796670, 3645), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:824 protocol:op_msg 400ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.399-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.399-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:11.137-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56290 #93 (38 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.399-0500 [jsTest] New session started with sessionID: { "id" : UUID("6c20dae3-3d58-4e84-9f66-e45c7eb4b405") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:11.144-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51706 #51 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.399-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:11.144-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35070 #48 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.399-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:11.144-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46510 #121 (43 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.400-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.595-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.400-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.576-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: fa00b384-4e01-4753-8768-55923180e0c5: test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 ( bd024227-4d0c-4d17-a5bc-33092f16f4b5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.400-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.405-0500 I INDEX [conn114] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.400-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:11.216-0500 I COMMAND [conn33] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063") }, $clusterTime: { clusterTime: Timestamp(1574796671, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 167ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.400-0500 [jsTest] New session started with sessionID: { "id" : UUID("a9c981a2-3dbc-4da3-b581-6770e772473d") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.401-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.401-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:11.127-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44886 #75 (7 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.401-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:11.137-0500 I NETWORK [conn93] received client metadata from 127.0.0.1:56290 conn93: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.401-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:11.144-0500 I NETWORK [conn51] received client metadata from 127.0.0.1:51706 conn51: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.401-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:11.144-0500 I NETWORK [conn48] received client metadata from 127.0.0.1:35070 conn48: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.401-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:11.144-0500 I NETWORK [conn121] received client metadata from 127.0.0.1:46510 conn121: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.401-0500 [jsTest] New session started with sessionID: { "id" : UUID("23beec8b-21e2-47b2-8bf9-6b4208124c12") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.595-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: e970399d-2ef6-4905-b35a-8bb1c62e1418: test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.401-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.592-0500 I INDEX [ReplWriterWorker-9] index build: starting on test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.402-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.405-0500 I INDEX [conn114] Registering index build: 256daffa-ec19-4b0e-a3da-32c273fba78d
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.402-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:11.127-0500 I NETWORK [conn75] received client metadata from 127.0.0.1:44886 conn75: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.402-0500 Running data consistency checks for replica set: shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:11.138-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56292 #94 (39 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.402-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:11.216-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51730 #52 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.402-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:11.216-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35092 #49 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.402-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:11.145-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46514 #122 (44 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.402-0500 [jsTest] New session started with sessionID: { "id" : UUID("735e4be8-c7f8-47f8-80eb-3b55e97f5e15") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.595-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.403-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.592-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.403-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.420-0500 I INDEX [conn46] index build: starting on test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.403-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:11.165-0500 I COMMAND [conn74] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856") }, $clusterTime: { clusterTime: Timestamp(1574796671, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 117ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.403-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:11.138-0500 I NETWORK [conn94] received client metadata from 127.0.0.1:56292 conn94: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.403-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:11.216-0500 I NETWORK [conn52] received client metadata from 127.0.0.1:51730 conn52: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.403-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.403-0500 [jsTest] New session started with sessionID: { "id" : UUID("59482e33-2072-4c7a-8b17-0c795795be2f") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.403-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.403-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:11.216-0500 I NETWORK [conn49] received client metadata from 127.0.0.1:35092 conn49: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.404-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:11.145-0500 I NETWORK [conn122] received client metadata from 127.0.0.1:46514 conn122: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.404-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.596-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.404-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.592-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: e2037135-a47f-459d-8d99-bae01cd8c347: test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 (34493476-06d5-494e-8201-f3daa0838c9d ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.404-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.420-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.404-0500 [jsTest] New session started with sessionID: { "id" : UUID("edf9e13a-af1d-447b-9453-3370e524297c") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:11.200-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44908 #76 (8 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.404-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:11.227-0500 W CONTROL [conn52] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 40 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.404-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:11.228-0500 W CONTROL [conn49] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 43 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.405-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:11.212-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46528 #123 (45 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.405-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.598-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.405-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.405-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.592-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.405-0500 [jsTest] New session started with sessionID: { "id" : UUID("c8c15e08-f1a6-4edc-831c-249e4d0ea0c0") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.420-0500 I STORAGE [conn46] Index build initialized: 635372fb-507f-416b-a289-dd20f3e247ea: test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 (30b7e609-caec-491b-8860-d6828489d28f ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.405-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:11.200-0500 I NETWORK [conn76] received client metadata from 127.0.0.1:44908 conn76: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.405-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:11.243-0500 W CONTROL [conn52] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 40 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.405-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:11.243-0500 W CONTROL [conn49] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 43 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.406-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:11.212-0500 I NETWORK [conn123] received client metadata from 127.0.0.1:46528 conn123: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.406-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.600-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 (30b7e609-caec-491b-8860-d6828489d28f) to test1_fsmdb0.agg_out and drop f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.406-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.592-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.406-0500 [jsTest] New session started with sessionID: { "id" : UUID("c65a601f-c957-428e-adeb-3bd85740d639") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.420-0500 I INDEX [conn46] Waiting for index build to complete: 635372fb-507f-416b-a289-dd20f3e247ea
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.406-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.406-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:11.203-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44910 #77 (9 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.406-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:11.246-0500 I NETWORK [conn52] end connection 127.0.0.1:51730 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.406-0500 Running data consistency checks for replica set: shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:11.246-0500 I NETWORK [conn49] end connection 127.0.0.1:35092 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.407-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:11.215-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46530 #124 (46 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.407-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.600-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test1_fsmdb0.agg_out (f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 1572), t: 1 } and commit timestamp Timestamp(1574796670, 1572)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.407-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.594-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.407-0500 [jsTest] New session started with sessionID: { "id" : UUID("22b55b7e-19b1-44f7-a006-5fd0a88e8439") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.434-0500 I INDEX [conn108] index build: starting on test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.407-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.407-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:11.204-0500 I NETWORK [conn77] received client metadata from 127.0.0.1:44910 conn77: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.407-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:11.215-0500 I NETWORK [conn124] received client metadata from 127.0.0.1:46530 conn124: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.407-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.600-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test1_fsmdb0.agg_out (f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4).
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.408-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.596-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e2037135-a47f-459d-8d99-bae01cd8c347: test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 ( 34493476-06d5-494e-8201-f3daa0838c9d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.408-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.434-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.408-0500 [jsTest] New session started with sessionID: { "id" : UUID("4d729167-e406-4e8d-91ff-243998628740") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:11.245-0500 I NETWORK [conn77] end connection 127.0.0.1:44910 (8 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.408-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:11.227-0500 W CONTROL [conn124] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 47 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.408-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.600-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection 30b7e609-caec-491b-8860-d6828489d28f from test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 to test1_fsmdb0.agg_out
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.408-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.612-0500 I INDEX [ReplWriterWorker-11] index build: starting on test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.408-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.434-0500 I STORAGE [conn108] Index build initialized: 5b0b55c7-33fc-4d1a-94d9-b7b5b1ba5fa5: test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 (247a5030-b416-45d9-b2c7-e0f93a48ca5c ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.409-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:11.248-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563") }, $clusterTime: { clusterTime: Timestamp(1574796671, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 199ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.409-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:11.242-0500 W CONTROL [conn124] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 47 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.409-0500 [jsTest] New session started with sessionID: { "id" : UUID("ce1b6fba-29b6-4c71-9f0b-4316a619610a") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.600-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4)'. Ident: 'index-54--8000595249233899911', commit timestamp: 'Timestamp(1574796670, 1572)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.409-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.612-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.409-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.434-0500 I INDEX [conn108] Waiting for index build to complete: 5b0b55c7-33fc-4d1a-94d9-b7b5b1ba5fa5
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.409-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:11.245-0500 I NETWORK [conn123] end connection 127.0.0.1:46528 (45 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.409-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.600-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4)'. Ident: 'index-55--8000595249233899911', commit timestamp: 'Timestamp(1574796670, 1572)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.409-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.612-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: eaea30d7-af05-4e61-b244-c1d0efaac63d: test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.410-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.434-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.410-0500 [jsTest] New session started with sessionID: { "id" : UUID("c2fd152e-5073-489c-bebe-220f9da9e078") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:11.245-0500 I NETWORK [conn124] end connection 127.0.0.1:46530 (44 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.410-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.600-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-53--8000595249233899911, commit timestamp: Timestamp(1574796670, 1572)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.410-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.612-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:12.410-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.434-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.603-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e970399d-2ef6-4905-b35a-8bb1c62e1418: test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db ( ae2d7e04-d26b-40a8-b331-95f6b2cc3d27 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.613-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.435-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.604-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 (247a5030-b416-45d9-b2c7-e0f93a48ca5c) to test1_fsmdb0.agg_out and drop 30b7e609-caec-491b-8860-d6828489d28f.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.615-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.435-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.604-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test1_fsmdb0.agg_out (30b7e609-caec-491b-8860-d6828489d28f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 1817), t: 1 } and commit timestamp Timestamp(1574796670, 1817)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.616-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 (30b7e609-caec-491b-8860-d6828489d28f) to test1_fsmdb0.agg_out and drop f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.444-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.604-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test1_fsmdb0.agg_out (30b7e609-caec-491b-8860-d6828489d28f).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.616-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test1_fsmdb0.agg_out (f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 1572), t: 1 } and commit timestamp Timestamp(1574796670, 1572)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.447-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.604-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection 247a5030-b416-45d9-b2c7-e0f93a48ca5c from test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.616-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test1_fsmdb0.agg_out (f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.455-0500 I INDEX [conn110] index build: starting on test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.604-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (30b7e609-caec-491b-8860-d6828489d28f)'. Ident: 'index-58--8000595249233899911', commit timestamp: 'Timestamp(1574796670, 1817)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.616-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 30b7e609-caec-491b-8860-d6828489d28f from test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.455-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.604-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (30b7e609-caec-491b-8860-d6828489d28f)'. Ident: 'index-67--8000595249233899911', commit timestamp: 'Timestamp(1574796670, 1817)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.617-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4)'. Ident: 'index-54--4104909142373009110', commit timestamp: 'Timestamp(1574796670, 1572)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.455-0500 I STORAGE [conn110] Index build initialized: 694cada4-0cd7-414a-af45-f1a505808390: test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 (bd024227-4d0c-4d17-a5bc-33092f16f4b5 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.604-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-57--8000595249233899911, commit timestamp: Timestamp(1574796670, 1817)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.617-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4)'. Ident: 'index-55--4104909142373009110', commit timestamp: 'Timestamp(1574796670, 1572)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.455-0500 I INDEX [conn110] Waiting for index build to complete: 694cada4-0cd7-414a-af45-f1a505808390
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.609-0500 I STORAGE [ReplWriterWorker-8] createCollection: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 with provided UUID: 418c9b5a-5c5b-489a-9bd0-4f2a944f24c3 and options: { uuid: UUID("418c9b5a-5c5b-489a-9bd0-4f2a944f24c3"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.617-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-53--4104909142373009110, commit timestamp: Timestamp(1574796670, 1572)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.456-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 635372fb-507f-416b-a289-dd20f3e247ea: test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 ( 30b7e609-caec-491b-8860-d6828489d28f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.621-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.618-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: eaea30d7-af05-4e61-b244-c1d0efaac63d: test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db ( ae2d7e04-d26b-40a8-b331-95f6b2cc3d27 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.458-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 5b0b55c7-33fc-4d1a-94d9-b7b5b1ba5fa5: test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 ( 247a5030-b416-45d9-b2c7-e0f93a48ca5c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.625-0500 I STORAGE [ReplWriterWorker-15] createCollection: test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b with provided UUID: b367778f-f90c-4b21-bd95-25d7e9b4cdde and options: { uuid: UUID("b367778f-f90c-4b21-bd95-25d7e9b4cdde"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.631-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 (247a5030-b416-45d9-b2c7-e0f93a48ca5c) to test1_fsmdb0.agg_out and drop 30b7e609-caec-491b-8860-d6828489d28f.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.472-0500 I INDEX [conn112] index build: starting on test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.639-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.631-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test1_fsmdb0.agg_out (30b7e609-caec-491b-8860-d6828489d28f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 1817), t: 1 } and commit timestamp Timestamp(1574796670, 1817)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.472-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.642-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 (bd024227-4d0c-4d17-a5bc-33092f16f4b5) to test1_fsmdb0.agg_out and drop 247a5030-b416-45d9-b2c7-e0f93a48ca5c.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.631-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test1_fsmdb0.agg_out (30b7e609-caec-491b-8860-d6828489d28f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.472-0500 I STORAGE [conn112] Index build initialized: 1f321288-4246-41ef-b128-6af4a230adf2: test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 (34493476-06d5-494e-8201-f3daa0838c9d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.643-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test1_fsmdb0.agg_out (247a5030-b416-45d9-b2c7-e0f93a48ca5c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 2832), t: 1 } and commit timestamp Timestamp(1574796670, 2832)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.631-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 247a5030-b416-45d9-b2c7-e0f93a48ca5c from test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.473-0500 I INDEX [conn112] Waiting for index build to complete: 1f321288-4246-41ef-b128-6af4a230adf2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.643-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test1_fsmdb0.agg_out (247a5030-b416-45d9-b2c7-e0f93a48ca5c).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.631-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (30b7e609-caec-491b-8860-d6828489d28f)'. Ident: 'index-58--4104909142373009110', commit timestamp: 'Timestamp(1574796670, 1817)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.473-0500 I INDEX [conn108] Index build completed: 5b0b55c7-33fc-4d1a-94d9-b7b5b1ba5fa5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.643-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection bd024227-4d0c-4d17-a5bc-33092f16f4b5 from test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.631-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (30b7e609-caec-491b-8860-d6828489d28f)'. Ident: 'index-67--4104909142373009110', commit timestamp: 'Timestamp(1574796670, 1817)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.473-0500 I INDEX [conn46] Index build completed: 635372fb-507f-416b-a289-dd20f3e247ea
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.643-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (247a5030-b416-45d9-b2c7-e0f93a48ca5c)'. Ident: 'index-60--8000595249233899911', commit timestamp: 'Timestamp(1574796670, 2832)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.631-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-57--4104909142373009110, commit timestamp: Timestamp(1574796670, 1817)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.473-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.643-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (247a5030-b416-45d9-b2c7-e0f93a48ca5c)'. Ident: 'index-69--8000595249233899911', commit timestamp: 'Timestamp(1574796670, 2832)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.635-0500 I STORAGE [ReplWriterWorker-0] createCollection: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 with provided UUID: 418c9b5a-5c5b-489a-9bd0-4f2a944f24c3 and options: { uuid: UUID("418c9b5a-5c5b-489a-9bd0-4f2a944f24c3"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.473-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.643-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-59--8000595249233899911, commit timestamp: Timestamp(1574796670, 2832)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.650-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.473-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.647-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 (34493476-06d5-494e-8201-f3daa0838c9d) to test1_fsmdb0.agg_out and drop bd024227-4d0c-4d17-a5bc-33092f16f4b5.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.654-0500 I STORAGE [ReplWriterWorker-14] createCollection: test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b with provided UUID: b367778f-f90c-4b21-bd95-25d7e9b4cdde and options: { uuid: UUID("b367778f-f90c-4b21-bd95-25d7e9b4cdde"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.474-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.647-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test1_fsmdb0.agg_out (bd024227-4d0c-4d17-a5bc-33092f16f4b5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 3014), t: 1 } and commit timestamp Timestamp(1574796670, 3014)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.669-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.482-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.647-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test1_fsmdb0.agg_out (bd024227-4d0c-4d17-a5bc-33092f16f4b5).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.672-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 (bd024227-4d0c-4d17-a5bc-33092f16f4b5) to test1_fsmdb0.agg_out and drop 247a5030-b416-45d9-b2c7-e0f93a48ca5c.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.485-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.647-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 34493476-06d5-494e-8201-f3daa0838c9d from test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.672-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test1_fsmdb0.agg_out (247a5030-b416-45d9-b2c7-e0f93a48ca5c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 2832), t: 1 } and commit timestamp Timestamp(1574796670, 2832)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.492-0500 I INDEX [conn114] index build: starting on test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.647-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (bd024227-4d0c-4d17-a5bc-33092f16f4b5)'. Ident: 'index-62--8000595249233899911', commit timestamp: 'Timestamp(1574796670, 3014)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.672-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test1_fsmdb0.agg_out (247a5030-b416-45d9-b2c7-e0f93a48ca5c).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.492-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.647-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (bd024227-4d0c-4d17-a5bc-33092f16f4b5)'. Ident: 'index-71--8000595249233899911', commit timestamp: 'Timestamp(1574796670, 3014)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.672-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection bd024227-4d0c-4d17-a5bc-33092f16f4b5 from test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.492-0500 I STORAGE [conn114] Index build initialized: 256daffa-ec19-4b0e-a3da-32c273fba78d: test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.647-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-61--8000595249233899911, commit timestamp: Timestamp(1574796670, 3014)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.672-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (247a5030-b416-45d9-b2c7-e0f93a48ca5c)'. Ident: 'index-60--4104909142373009110', commit timestamp: 'Timestamp(1574796670, 2832)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.492-0500 I INDEX [conn114] Waiting for index build to complete: 256daffa-ec19-4b0e-a3da-32c273fba78d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.656-0500 I STORAGE [ReplWriterWorker-5] createCollection: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad with provided UUID: f4a2101d-f864-403b-a1ca-601c782ee658 and options: { uuid: UUID("f4a2101d-f864-403b-a1ca-601c782ee658"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.672-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (247a5030-b416-45d9-b2c7-e0f93a48ca5c)'. Ident: 'index-69--4104909142373009110', commit timestamp: 'Timestamp(1574796670, 2832)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.493-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.670-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.672-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-59--4104909142373009110, commit timestamp: Timestamp(1574796670, 2832)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.495-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 694cada4-0cd7-414a-af45-f1a505808390: test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 ( bd024227-4d0c-4d17-a5bc-33092f16f4b5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.686-0500 I INDEX [ReplWriterWorker-7] index build: starting on test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.676-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 (34493476-06d5-494e-8201-f3daa0838c9d) to test1_fsmdb0.agg_out and drop bd024227-4d0c-4d17-a5bc-33092f16f4b5.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.495-0500 I INDEX [conn110] Index build completed: 694cada4-0cd7-414a-af45-f1a505808390
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.686-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.676-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test1_fsmdb0.agg_out (bd024227-4d0c-4d17-a5bc-33092f16f4b5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 3014), t: 1 } and commit timestamp Timestamp(1574796670, 3014)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.495-0500 I COMMAND [conn110] command test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 544), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 102ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.686-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: f15361ec-e1d1-4a43-920d-1b9f0bb46438: test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b (b367778f-f90c-4b21-bd95-25d7e9b4cdde ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.676-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test1_fsmdb0.agg_out (bd024227-4d0c-4d17-a5bc-33092f16f4b5).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.499-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 1f321288-4246-41ef-b128-6af4a230adf2: test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 ( 34493476-06d5-494e-8201-f3daa0838c9d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.686-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.676-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 34493476-06d5-494e-8201-f3daa0838c9d from test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.499-0500 I INDEX [conn112] Index build completed: 1f321288-4246-41ef-b128-6af4a230adf2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.686-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.676-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (bd024227-4d0c-4d17-a5bc-33092f16f4b5)'. Ident: 'index-62--4104909142373009110', commit timestamp: 'Timestamp(1574796670, 3014)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.500-0500 I COMMAND [conn112] command test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 544), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 101ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.688-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27) to test1_fsmdb0.agg_out and drop 34493476-06d5-494e-8201-f3daa0838c9d.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.676-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (bd024227-4d0c-4d17-a5bc-33092f16f4b5)'. Ident: 'index-71--4104909142373009110', commit timestamp: 'Timestamp(1574796670, 3014)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.500-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.688-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.676-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-61--4104909142373009110, commit timestamp: Timestamp(1574796670, 3014)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.504-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.688-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test1_fsmdb0.agg_out (34493476-06d5-494e-8201-f3daa0838c9d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 3071), t: 1 } and commit timestamp Timestamp(1574796670, 3071)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.678-0500 I STORAGE [ReplWriterWorker-5] createCollection: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad with provided UUID: f4a2101d-f864-403b-a1ca-601c782ee658 and options: { uuid: UUID("f4a2101d-f864-403b-a1ca-601c782ee658"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.508-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 256daffa-ec19-4b0e-a3da-32c273fba78d: test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db ( ae2d7e04-d26b-40a8-b331-95f6b2cc3d27 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.688-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test1_fsmdb0.agg_out (34493476-06d5-494e-8201-f3daa0838c9d).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.694-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.509-0500 I INDEX [conn114] Index build completed: 256daffa-ec19-4b0e-a3da-32c273fba78d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.689-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection ae2d7e04-d26b-40a8-b331-95f6b2cc3d27 from test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.708-0500 I INDEX [ReplWriterWorker-11] index build: starting on test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.509-0500 I COMMAND [conn114] command test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 544), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 103ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.689-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (34493476-06d5-494e-8201-f3daa0838c9d)'. Ident: 'index-64--8000595249233899911', commit timestamp: 'Timestamp(1574796670, 3071)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.708-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.517-0500 I COMMAND [conn46] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.689-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (34493476-06d5-494e-8201-f3daa0838c9d)'. Ident: 'index-73--8000595249233899911', commit timestamp: 'Timestamp(1574796670, 3071)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.709-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: b7f64aa6-a996-4488-9e5e-320e66ede171: test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b (b367778f-f90c-4b21-bd95-25d7e9b4cdde ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.517-0500 I STORAGE [conn46] dropCollection: test1_fsmdb0.agg_out (f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 1572), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.689-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-63--8000595249233899911, commit timestamp: Timestamp(1574796670, 3071)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.709-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.517-0500 I STORAGE [conn46] Finishing collection drop for test1_fsmdb0.agg_out (f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.691-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: f15361ec-e1d1-4a43-920d-1b9f0bb46438: test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b ( b367778f-f90c-4b21-bd95-25d7e9b4cdde ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.709-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.517-0500 I STORAGE [conn46] renameCollection: renaming collection 30b7e609-caec-491b-8860-d6828489d28f from test1_fsmdb0.tmp.agg_out.38950466-9bca-4cbd-b994-28079a60db92 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.700-0500 I STORAGE [ReplWriterWorker-2] createCollection: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 with provided UUID: 7ee90729-1153-4b80-b011-fcdc7ee3a014 and options: { uuid: UUID("7ee90729-1153-4b80-b011-fcdc7ee3a014"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.711-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27) to test1_fsmdb0.agg_out and drop 34493476-06d5-494e-8201-f3daa0838c9d.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.517-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4)'. Ident: 'index-45-8224331490264904478', commit timestamp: 'Timestamp(1574796670, 1572)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.715-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.712-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.517-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (f0e9dad2-72e2-4d70-b831-14c7fc4e2ba4)'. Ident: 'index-46-8224331490264904478', commit timestamp: 'Timestamp(1574796670, 1572)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.732-0500 I INDEX [ReplWriterWorker-15] index build: starting on test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.712-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test1_fsmdb0.agg_out (34493476-06d5-494e-8201-f3daa0838c9d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 3071), t: 1 } and commit timestamp Timestamp(1574796670, 3071)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.517-0500 I STORAGE [conn46] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-44-8224331490264904478, commit timestamp: Timestamp(1574796670, 1572)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.732-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.712-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test1_fsmdb0.agg_out (34493476-06d5-494e-8201-f3daa0838c9d).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.517-0500 I COMMAND [conn68] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1236576849660503856, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3944945577625726776, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796670350), clusterTime: Timestamp(1574796670, 539) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 539), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 166ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.732-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 7654e1e5-a76b-4257-8882-17678a86fb45: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 (418c9b5a-5c5b-489a-9bd0-4f2a944f24c3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.712-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection ae2d7e04-d26b-40a8-b331-95f6b2cc3d27 from test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.520-0500 I COMMAND [conn68] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.732-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.712-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (34493476-06d5-494e-8201-f3daa0838c9d)'. Ident: 'index-64--4104909142373009110', commit timestamp: 'Timestamp(1574796670, 3071)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.522-0500 I COMMAND [conn108] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.732-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.712-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (34493476-06d5-494e-8201-f3daa0838c9d)'. Ident: 'index-73--4104909142373009110', commit timestamp: 'Timestamp(1574796670, 3071)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.522-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.agg_out (30b7e609-caec-491b-8860-d6828489d28f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 1817), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.735-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.712-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-63--4104909142373009110, commit timestamp: Timestamp(1574796670, 3071)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.522-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.agg_out (30b7e609-caec-491b-8860-d6828489d28f).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.736-0500 I STORAGE [ReplWriterWorker-12] createCollection: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 with provided UUID: 5d6fa62d-1d0b-4c3f-a788-8d271ca06181 and options: { uuid: UUID("5d6fa62d-1d0b-4c3f-a788-8d271ca06181"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.714-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: b7f64aa6-a996-4488-9e5e-320e66ede171: test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b ( b367778f-f90c-4b21-bd95-25d7e9b4cdde ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.522-0500 I STORAGE [conn108] renameCollection: renaming collection 247a5030-b416-45d9-b2c7-e0f93a48ca5c from test1_fsmdb0.tmp.agg_out.3680fdb9-ee6a-41f5-86dd-f0d5969aa795 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.739-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 7654e1e5-a76b-4257-8882-17678a86fb45: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 ( 418c9b5a-5c5b-489a-9bd0-4f2a944f24c3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.716-0500 I STORAGE [ReplWriterWorker-6] createCollection: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 with provided UUID: 7ee90729-1153-4b80-b011-fcdc7ee3a014 and options: { uuid: UUID("7ee90729-1153-4b80-b011-fcdc7ee3a014"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.522-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (30b7e609-caec-491b-8860-d6828489d28f)'. Ident: 'index-53-8224331490264904478', commit timestamp: 'Timestamp(1574796670, 1817)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.754-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.732-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.522-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (30b7e609-caec-491b-8860-d6828489d28f)'. Ident: 'index-58-8224331490264904478', commit timestamp: 'Timestamp(1574796670, 1817)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.775-0500 I INDEX [ReplWriterWorker-12] index build: starting on test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.747-0500 I INDEX [ReplWriterWorker-10] index build: starting on test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.522-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-48-8224331490264904478, commit timestamp: Timestamp(1574796670, 1817)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.775-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.747-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.522-0500 I COMMAND [conn67] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7657589589046188115, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6790032927209309628, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796670350), clusterTime: Timestamp(1574796670, 539) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 539), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:385 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 171ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.775-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 27cc304e-466f-4604-bfed-3aa7743cb00b: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad (f4a2101d-f864-403b-a1ca-601c782ee658 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.747-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 964abad6-0cd9-48fb-a0ce-436988300d61: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 (418c9b5a-5c5b-489a-9bd0-4f2a944f24c3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.523-0500 I STORAGE [conn108] createCollection: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 with generated UUID: 418c9b5a-5c5b-489a-9bd0-4f2a944f24c3 and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.775-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.748-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.525-0500 I COMMAND [conn67] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.776-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.748-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.527-0500 I STORAGE [conn114] createCollection: test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b with generated UUID: b367778f-f90c-4b21-bd95-25d7e9b4cdde and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.776-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b (b367778f-f90c-4b21-bd95-25d7e9b4cdde) to test1_fsmdb0.agg_out and drop ae2d7e04-d26b-40a8-b331-95f6b2cc3d27.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.751-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.577-0500 I INDEX [conn108] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.779-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.752-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 964abad6-0cd9-48fb-a0ce-436988300d61: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 ( 418c9b5a-5c5b-489a-9bd0-4f2a944f24c3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.584-0500 I INDEX [conn114] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.780-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test1_fsmdb0.agg_out (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 3579), t: 1 } and commit timestamp Timestamp(1574796670, 3579)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.755-0500 I STORAGE [ReplWriterWorker-0] createCollection: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 with provided UUID: 5d6fa62d-1d0b-4c3f-a788-8d271ca06181 and options: { uuid: UUID("5d6fa62d-1d0b-4c3f-a788-8d271ca06181"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.584-0500 I COMMAND [conn110] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.780-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test1_fsmdb0.agg_out (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.771-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.584-0500 I STORAGE [conn110] dropCollection: test1_fsmdb0.agg_out (247a5030-b416-45d9-b2c7-e0f93a48ca5c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 2832), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.780-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection b367778f-f90c-4b21-bd95-25d7e9b4cdde from test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.791-0500 I INDEX [ReplWriterWorker-6] index build: starting on test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.584-0500 I STORAGE [conn110] Finishing collection drop for test1_fsmdb0.agg_out (247a5030-b416-45d9-b2c7-e0f93a48ca5c).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.780-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27)'. Ident: 'index-66--8000595249233899911', commit timestamp: 'Timestamp(1574796670, 3579)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.791-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.584-0500 I STORAGE [conn110] renameCollection: renaming collection bd024227-4d0c-4d17-a5bc-33092f16f4b5 from test1_fsmdb0.tmp.agg_out.d74fa0c3-24a2-4e63-853f-5ad9ae980900 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.780-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27)'. Ident: 'index-75--8000595249233899911', commit timestamp: 'Timestamp(1574796670, 3579)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.791-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: ac2d7e22-14c5-4d10-b450-fb68109e1e26: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad (f4a2101d-f864-403b-a1ca-601c782ee658 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.584-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (247a5030-b416-45d9-b2c7-e0f93a48ca5c)'. Ident: 'index-54-8224331490264904478', commit timestamp: 'Timestamp(1574796670, 2832)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.780-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-65--8000595249233899911, commit timestamp: Timestamp(1574796670, 3579)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.791-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.584-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (247a5030-b416-45d9-b2c7-e0f93a48ca5c)'. Ident: 'index-60-8224331490264904478', commit timestamp: 'Timestamp(1574796670, 2832)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.783-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 27cc304e-466f-4604-bfed-3aa7743cb00b: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad ( f4a2101d-f864-403b-a1ca-601c782ee658 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.791-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.584-0500 I STORAGE [conn110] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-49-8224331490264904478, commit timestamp: Timestamp(1574796670, 2832)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.788-0500 I STORAGE [ReplWriterWorker-5] createCollection: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 with provided UUID: 7cd99e28-0fa5-4979-824b-e49dbbfe73da and options: { uuid: UUID("7cd99e28-0fa5-4979-824b-e49dbbfe73da"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.792-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b (b367778f-f90c-4b21-bd95-25d7e9b4cdde) to test1_fsmdb0.agg_out and drop ae2d7e04-d26b-40a8-b331-95f6b2cc3d27.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.584-0500 I INDEX [conn114] Registering index build: de65fd72-3cb8-497c-9bc0-6a2299a57122
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.805-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.794-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.584-0500 I INDEX [conn108] Registering index build: 4339b748-57e3-4dd9-ad52-6a3c9c79f57f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.821-0500 I INDEX [ReplWriterWorker-5] index build: starting on test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.794-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test1_fsmdb0.agg_out (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 3579), t: 1 } and commit timestamp Timestamp(1574796670, 3579)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.584-0500 I COMMAND [conn65] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4546461976437904142, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 9021960855480189471, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796670350), clusterTime: Timestamp(1574796670, 539) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 539), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, 
Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 231ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.821-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.794-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test1_fsmdb0.agg_out (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.620-0500 I INDEX [conn114] index build: starting on test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.821-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: b1246853-b5d1-49d5-8071-d01f0e02a77b: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 (7ee90729-1153-4b80-b011-fcdc7ee3a014 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.794-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection b367778f-f90c-4b21-bd95-25d7e9b4cdde from test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.620-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.821-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.794-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27)'. Ident: 'index-66--4104909142373009110', commit timestamp: 'Timestamp(1574796670, 3579)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.620-0500 I STORAGE [conn114] Index build initialized: de65fd72-3cb8-497c-9bc0-6a2299a57122: test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b (b367778f-f90c-4b21-bd95-25d7e9b4cdde ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.822-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.794-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27)'. Ident: 'index-75--4104909142373009110', commit timestamp: 'Timestamp(1574796670, 3579)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.620-0500 I INDEX [conn114] Waiting for index build to complete: de65fd72-3cb8-497c-9bc0-6a2299a57122
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.824-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.794-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-65--4104909142373009110, commit timestamp: Timestamp(1574796670, 3579)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.620-0500 I COMMAND [conn112] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.827-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: b1246853-b5d1-49d5-8071-d01f0e02a77b: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 ( 7ee90729-1153-4b80-b011-fcdc7ee3a014 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.797-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ac2d7e22-14c5-4d10-b450-fb68109e1e26: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad ( f4a2101d-f864-403b-a1ca-601c782ee658 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.620-0500 I STORAGE [conn112] dropCollection: test1_fsmdb0.agg_out (bd024227-4d0c-4d17-a5bc-33092f16f4b5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 3014), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.842-0500 I INDEX [ReplWriterWorker-6] index build: starting on test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.806-0500 I STORAGE [ReplWriterWorker-13] createCollection: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 with provided UUID: 7cd99e28-0fa5-4979-824b-e49dbbfe73da and options: { uuid: UUID("7cd99e28-0fa5-4979-824b-e49dbbfe73da"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.620-0500 I STORAGE [conn112] Finishing collection drop for test1_fsmdb0.agg_out (bd024227-4d0c-4d17-a5bc-33092f16f4b5).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.842-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.818-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.620-0500 I STORAGE [conn112] renameCollection: renaming collection 34493476-06d5-494e-8201-f3daa0838c9d from test1_fsmdb0.tmp.agg_out.f041ee40-1e55-4acb-b46e-2749d1ebd8a2 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.842-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 1243b414-0dd6-4afc-bb48-546639aed0af: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 (5d6fa62d-1d0b-4c3f-a788-8d271ca06181 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.837-0500 I INDEX [ReplWriterWorker-15] index build: starting on test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.620-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (bd024227-4d0c-4d17-a5bc-33092f16f4b5)'. Ident: 'index-55-8224331490264904478', commit timestamp: 'Timestamp(1574796670, 3014)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.842-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.837-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.620-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (bd024227-4d0c-4d17-a5bc-33092f16f4b5)'. Ident: 'index-62-8224331490264904478', commit timestamp: 'Timestamp(1574796670, 3014)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.843-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.837-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 40e6576e-2eb0-4cbd-8657-a2ffa73cb87b: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 (7ee90729-1153-4b80-b011-fcdc7ee3a014 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.620-0500 I STORAGE [conn112] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-50-8224331490264904478, commit timestamp: Timestamp(1574796670, 3014)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.844-0500 I COMMAND [ReplWriterWorker-0] CMD: drop test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.837-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.621-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.844-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 (418c9b5a-5c5b-489a-9bd0-4f2a944f24c3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 4653), t: 1 } and commit timestamp Timestamp(1574796670, 4653)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.838-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.621-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5459904903005149906, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3707753988895127330, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796670352), clusterTime: Timestamp(1574796670, 536) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 540), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 267ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.844-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 (418c9b5a-5c5b-489a-9bd0-4f2a944f24c3).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.840-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.621-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.844-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 (418c9b5a-5c5b-489a-9bd0-4f2a944f24c3)'. Ident: 'index-78--8000595249233899911', commit timestamp: 'Timestamp(1574796670, 4653)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.844-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 40e6576e-2eb0-4cbd-8657-a2ffa73cb87b: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 ( 7ee90729-1153-4b80-b011-fcdc7ee3a014 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.622-0500 I STORAGE [conn112] createCollection: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad with generated UUID: f4a2101d-f864-403b-a1ca-601c782ee658 and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.844-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 (418c9b5a-5c5b-489a-9bd0-4f2a944f24c3)'. Ident: 'index-87--8000595249233899911', commit timestamp: 'Timestamp(1574796670, 4653)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.857-0500 I INDEX [ReplWriterWorker-3] index build: starting on test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.630-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.844-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313'. Ident: collection-77--8000595249233899911, commit timestamp: Timestamp(1574796670, 4653)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.857-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.644-0500 I INDEX [conn108] index build: starting on test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.845-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.857-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 4c6ac508-9d71-4c40-acc2-86a1a936dd24: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 (5d6fa62d-1d0b-4c3f-a788-8d271ca06181 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.644-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:10.849-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 1243b414-0dd6-4afc-bb48-546639aed0af: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 ( 5d6fa62d-1d0b-4c3f-a788-8d271ca06181 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.857-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.644-0500 I STORAGE [conn108] Index build initialized: 4339b748-57e3-4dd9-ad52-6a3c9c79f57f: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 (418c9b5a-5c5b-489a-9bd0-4f2a944f24c3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.858-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.062-0500 I INDEX [ReplWriterWorker-6] index build: starting on test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.644-0500 I INDEX [conn108] Waiting for index build to complete: 4339b748-57e3-4dd9-ad52-6a3c9c79f57f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.859-0500 I COMMAND [ReplWriterWorker-1] CMD: drop test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.062-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.645-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: de65fd72-3cb8-497c-9bc0-6a2299a57122: test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b ( b367778f-f90c-4b21-bd95-25d7e9b4cdde ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.859-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 (418c9b5a-5c5b-489a-9bd0-4f2a944f24c3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 4653), t: 1 } and commit timestamp Timestamp(1574796670, 4653)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.062-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 4f663bef-2f3f-4bd9-be8c-e44ca2ded1c8: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 (7cd99e28-0fa5-4979-824b-e49dbbfe73da ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.645-0500 I INDEX [conn114] Index build completed: de65fd72-3cb8-497c-9bc0-6a2299a57122
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.859-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 (418c9b5a-5c5b-489a-9bd0-4f2a944f24c3).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.062-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.653-0500 I INDEX [conn112] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.859-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 (418c9b5a-5c5b-489a-9bd0-4f2a944f24c3)'. Ident: 'index-78--4104909142373009110', commit timestamp: 'Timestamp(1574796670, 4653)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.062-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.653-0500 I COMMAND [conn46] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.859-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 (418c9b5a-5c5b-489a-9bd0-4f2a944f24c3)'. Ident: 'index-87--4104909142373009110', commit timestamp: 'Timestamp(1574796670, 4653)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.063-0500 I COMMAND [ReplWriterWorker-12] CMD: drop test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.654-0500 I STORAGE [conn46] dropCollection: test1_fsmdb0.agg_out (34493476-06d5-494e-8201-f3daa0838c9d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 3071), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.859-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313'. Ident: collection-77--4104909142373009110, commit timestamp: Timestamp(1574796670, 4653)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.063-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad (f4a2101d-f864-403b-a1ca-601c782ee658) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 3), t: 1 } and commit timestamp Timestamp(1574796671, 3)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.654-0500 I STORAGE [conn46] Finishing collection drop for test1_fsmdb0.agg_out (34493476-06d5-494e-8201-f3daa0838c9d).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.860-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.063-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad (f4a2101d-f864-403b-a1ca-601c782ee658).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.654-0500 I STORAGE [conn46] renameCollection: renaming collection ae2d7e04-d26b-40a8-b331-95f6b2cc3d27 from test1_fsmdb0.tmp.agg_out.faf90ad4-d967-428f-80b3-5958433207db to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:10.861-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 4c6ac508-9d71-4c40-acc2-86a1a936dd24: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 ( 5d6fa62d-1d0b-4c3f-a788-8d271ca06181 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.063-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad (f4a2101d-f864-403b-a1ca-601c782ee658)'. Ident: 'index-82--8000595249233899911', commit timestamp: 'Timestamp(1574796671, 3)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.654-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (34493476-06d5-494e-8201-f3daa0838c9d)'. Ident: 'index-56-8224331490264904478', commit timestamp: 'Timestamp(1574796670, 3071)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.076-0500 I INDEX [ReplWriterWorker-5] index build: starting on test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.064-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad (f4a2101d-f864-403b-a1ca-601c782ee658)'. Ident: 'index-91--8000595249233899911', commit timestamp: 'Timestamp(1574796671, 3)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.654-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (34493476-06d5-494e-8201-f3daa0838c9d)'. Ident: 'index-64-8224331490264904478', commit timestamp: 'Timestamp(1574796670, 3071)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.076-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.064-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad'. Ident: collection-81--8000595249233899911, commit timestamp: Timestamp(1574796671, 3)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.654-0500 I STORAGE [conn46] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-51-8224331490264904478, commit timestamp: Timestamp(1574796670, 3071)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.076-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: c0bfe7c6-1173-4d2e-8c80-2c6a29051196: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 (7cd99e28-0fa5-4979-824b-e49dbbfe73da ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.065-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.654-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.076-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.066-0500 I COMMAND [ReplWriterWorker-13] CMD: drop test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.654-0500 I INDEX [conn112] Registering index build: c8dbdba2-891e-490f-aab8-5b483e383396
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.077-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.066-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 (7ee90729-1153-4b80-b011-fcdc7ee3a014) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 5), t: 1 } and commit timestamp Timestamp(1574796671, 5)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.654-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6282430676925606727, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6937696471281401714, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796670352), clusterTime: Timestamp(1574796670, 536) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 540), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:385 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 300ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.077-0500 I COMMAND [ReplWriterWorker-12] CMD: drop test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.066-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 (7ee90729-1153-4b80-b011-fcdc7ee3a014).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.654-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.078-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad (f4a2101d-f864-403b-a1ca-601c782ee658) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 3), t: 1 } and commit timestamp Timestamp(1574796671, 3)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.066-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 (7ee90729-1153-4b80-b011-fcdc7ee3a014)'. Ident: 'index-86--8000595249233899911', commit timestamp: 'Timestamp(1574796671, 5)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.655-0500 I STORAGE [conn46] createCollection: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 with generated UUID: 7ee90729-1153-4b80-b011-fcdc7ee3a014 and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.078-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad (f4a2101d-f864-403b-a1ca-601c782ee658).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.066-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 (7ee90729-1153-4b80-b011-fcdc7ee3a014)'. Ident: 'index-95--8000595249233899911', commit timestamp: 'Timestamp(1574796671, 5)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.657-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.078-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad (f4a2101d-f864-403b-a1ca-601c782ee658)'. Ident: 'index-82--4104909142373009110', commit timestamp: 'Timestamp(1574796671, 3)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.066-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440'. Ident: collection-85--8000595249233899911, commit timestamp: Timestamp(1574796671, 5)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.658-0500 I STORAGE [conn110] createCollection: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 with generated UUID: 5d6fa62d-1d0b-4c3f-a788-8d271ca06181 and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.078-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad (f4a2101d-f864-403b-a1ca-601c782ee658)'. Ident: 'index-91--4104909142373009110', commit timestamp: 'Timestamp(1574796671, 3)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.068-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 4f663bef-2f3f-4bd9-be8c-e44ca2ded1c8: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 ( 7cd99e28-0fa5-4979-824b-e49dbbfe73da ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.673-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4339b748-57e3-4dd9-ad52-6a3c9c79f57f: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 ( 418c9b5a-5c5b-489a-9bd0-4f2a944f24c3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.078-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad'. Ident: collection-81--4104909142373009110, commit timestamp: Timestamp(1574796671, 3)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.084-0500 I STORAGE [ReplWriterWorker-11] createCollection: test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 with provided UUID: d8cf1fbb-f747-4b37-96cd-3963ec009453 and options: { uuid: UUID("d8cf1fbb-f747-4b37-96cd-3963ec009453"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.690-0500 I INDEX [conn112] index build: starting on test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.078-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.096-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.690-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.079-0500 I COMMAND [ReplWriterWorker-14] CMD: drop test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.097-0500 I STORAGE [ReplWriterWorker-9] createCollection: test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 with provided UUID: c670e412-d661-4a65-8078-ec5ff359a93f and options: { uuid: UUID("c670e412-d661-4a65-8078-ec5ff359a93f"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.690-0500 I STORAGE [conn112] Index build initialized: c8dbdba2-891e-490f-aab8-5b483e383396: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad (f4a2101d-f864-403b-a1ca-601c782ee658 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.079-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 (7ee90729-1153-4b80-b011-fcdc7ee3a014) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 5), t: 1 } and commit timestamp Timestamp(1574796671, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.111-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.690-0500 I INDEX [conn112] Waiting for index build to complete: c8dbdba2-891e-490f-aab8-5b483e383396
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.079-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 (7ee90729-1153-4b80-b011-fcdc7ee3a014).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.114-0500 I STORAGE [ReplWriterWorker-11] createCollection: test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b with provided UUID: c7023eec-2ecd-45da-9e5d-d9873f84474d and options: { uuid: UUID("c7023eec-2ecd-45da-9e5d-d9873f84474d"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.690-0500 I INDEX [conn108] Index build completed: 4339b748-57e3-4dd9-ad52-6a3c9c79f57f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.079-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 (7ee90729-1153-4b80-b011-fcdc7ee3a014)'. Ident: 'index-86--4104909142373009110', commit timestamp: 'Timestamp(1574796671, 5)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.129-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.690-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.079-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 (7ee90729-1153-4b80-b011-fcdc7ee3a014)'. Ident: 'index-95--4104909142373009110', commit timestamp: 'Timestamp(1574796671, 5)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.141-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52958 #51 (13 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.690-0500 I COMMAND [conn108] command test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 2831), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 7386 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 112ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.080-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440'. Ident: collection-85--4104909142373009110, commit timestamp: Timestamp(1574796671, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.141-0500 I NETWORK [conn51] received client metadata from 127.0.0.1:52958 conn51: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.698-0500 I INDEX [conn46] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.080-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c0bfe7c6-1173-4d2e-8c80-2c6a29051196: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 ( 7cd99e28-0fa5-4979-824b-e49dbbfe73da ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.149-0500 I INDEX [ReplWriterWorker-14] index build: starting on test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.705-0500 I INDEX [conn110] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.098-0500 I STORAGE [ReplWriterWorker-3] createCollection: test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 with provided UUID: d8cf1fbb-f747-4b37-96cd-3963ec009453 and options: { uuid: UUID("d8cf1fbb-f747-4b37-96cd-3963ec009453"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.149-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.706-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.112-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.149-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 8dc15154-debd-49b8-8929-6a7bad219349: test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 (d8cf1fbb-f747-4b37-96cd-3963ec009453 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.708-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.113-0500 I STORAGE [ReplWriterWorker-2] createCollection: test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 with provided UUID: c670e412-d661-4a65-8078-ec5ff359a93f and options: { uuid: UUID("c670e412-d661-4a65-8078-ec5ff359a93f"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.149-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.708-0500 I COMMAND [conn108] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.128-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.150-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.708-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.agg_out (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796670, 3579), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.141-0500 I STORAGE [ReplWriterWorker-1] createCollection: test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b with provided UUID: c7023eec-2ecd-45da-9e5d-d9873f84474d and options: { uuid: UUID("c7023eec-2ecd-45da-9e5d-d9873f84474d"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.151-0500 I COMMAND [ReplWriterWorker-8] CMD: drop test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.708-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.agg_out (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.141-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52072 #55 (13 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.151-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 (5d6fa62d-1d0b-4c3f-a788-8d271ca06181) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 1013), t: 1 } and commit timestamp Timestamp(1574796671, 1013)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.708-0500 I STORAGE [conn108] renameCollection: renaming collection b367778f-f90c-4b21-bd95-25d7e9b4cdde from test1_fsmdb0.tmp.agg_out.e25d643e-b552-42b2-a904-2889a823ce0b to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.141-0500 I NETWORK [conn55] received client metadata from 127.0.0.1:52072 conn55: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.151-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 (5d6fa62d-1d0b-4c3f-a788-8d271ca06181).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.708-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27)'. Ident: 'index-57-8224331490264904478', commit timestamp: 'Timestamp(1574796670, 3579)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.156-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.151-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 (5d6fa62d-1d0b-4c3f-a788-8d271ca06181)'. Ident: 'index-90--8000595249233899911', commit timestamp: 'Timestamp(1574796671, 1013)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.708-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (ae2d7e04-d26b-40a8-b331-95f6b2cc3d27)'. Ident: 'index-66-8224331490264904478', commit timestamp: 'Timestamp(1574796670, 3579)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.177-0500 I INDEX [ReplWriterWorker-4] index build: starting on test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.151-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 (5d6fa62d-1d0b-4c3f-a788-8d271ca06181)'. Ident: 'index-97--8000595249233899911', commit timestamp: 'Timestamp(1574796671, 1013)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.708-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-52-8224331490264904478, commit timestamp: Timestamp(1574796670, 3579)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.177-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.151-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0'. Ident: collection-89--8000595249233899911, commit timestamp: Timestamp(1574796671, 1013)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.708-0500 I INDEX [conn46] Registering index build: 16274378-5535-45cb-a61c-782825863aef
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.177-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 040df26c-c438-4839-b37b-6be951a26f87: test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 (d8cf1fbb-f747-4b37-96cd-3963ec009453 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.152-0500 I COMMAND [ReplWriterWorker-3] CMD: drop test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.708-0500 I INDEX [conn110] Registering index build: 1a413a98-e131-4073-86fb-4b1668553785
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.177-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.152-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 (7cd99e28-0fa5-4979-824b-e49dbbfe73da) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 1014), t: 1 } and commit timestamp Timestamp(1574796671, 1014)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.708-0500 I COMMAND [conn67] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3325403964628411127, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2009420076669064712, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796670526), clusterTime: Timestamp(1574796670, 2202) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 2394), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 181ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.178-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.152-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 (7cd99e28-0fa5-4979-824b-e49dbbfe73da).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.710-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c8dbdba2-891e-490f-aab8-5b483e383396: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad ( f4a2101d-f864-403b-a1ca-601c782ee658 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.180-0500 I COMMAND [ReplWriterWorker-0] CMD: drop test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.152-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 (7cd99e28-0fa5-4979-824b-e49dbbfe73da)'. Ident: 'index-94--8000595249233899911', commit timestamp: 'Timestamp(1574796671, 1014)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.725-0500 I INDEX [conn46] index build: starting on test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.180-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 (5d6fa62d-1d0b-4c3f-a788-8d271ca06181) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 1013), t: 1 } and commit timestamp Timestamp(1574796671, 1013)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.152-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 (7cd99e28-0fa5-4979-824b-e49dbbfe73da)'. Ident: 'index-99--8000595249233899911', commit timestamp: 'Timestamp(1574796671, 1014)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.725-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.180-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 (5d6fa62d-1d0b-4c3f-a788-8d271ca06181).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.152-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4'. Ident: collection-93--8000595249233899911, commit timestamp: Timestamp(1574796671, 1014)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.725-0500 I STORAGE [conn46] Index build initialized: 16274378-5535-45cb-a61c-782825863aef: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 (7ee90729-1153-4b80-b011-fcdc7ee3a014 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.180-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 (5d6fa62d-1d0b-4c3f-a788-8d271ca06181)'. Ident: 'index-90--4104909142373009110', commit timestamp: 'Timestamp(1574796671, 1013)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.152-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.725-0500 I INDEX [conn46] Waiting for index build to complete: 16274378-5535-45cb-a61c-782825863aef
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.180-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 (5d6fa62d-1d0b-4c3f-a788-8d271ca06181)'. Ident: 'index-97--4104909142373009110', commit timestamp: 'Timestamp(1574796671, 1013)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.155-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 8dc15154-debd-49b8-8929-6a7bad219349: test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 ( d8cf1fbb-f747-4b37-96cd-3963ec009453 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.725-0500 I INDEX [conn112] Index build completed: c8dbdba2-891e-490f-aab8-5b483e383396
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.180-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0'. Ident: collection-89--4104909142373009110, commit timestamp: Timestamp(1574796671, 1013)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.172-0500 I INDEX [ReplWriterWorker-5] index build: starting on test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.725-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.181-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.172-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.726-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.181-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.172-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 5c1ae9f7-0080-4400-80cf-9141d4b6f1f3: test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 (c670e412-d661-4a65-8078-ec5ff359a93f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.728-0500 I STORAGE [conn108] createCollection: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 with generated UUID: 7cd99e28-0fa5-4979-824b-e49dbbfe73da and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.181-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 (7cd99e28-0fa5-4979-824b-e49dbbfe73da) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 1014), t: 1 } and commit timestamp Timestamp(1574796671, 1014)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.172-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.738-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.181-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 (7cd99e28-0fa5-4979-824b-e49dbbfe73da).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.173-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.754-0500 I INDEX [conn110] index build: starting on test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.181-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 (7cd99e28-0fa5-4979-824b-e49dbbfe73da)'. Ident: 'index-94--4104909142373009110', commit timestamp: 'Timestamp(1574796671, 1014)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.176-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.754-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.181-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 (7cd99e28-0fa5-4979-824b-e49dbbfe73da)'. Ident: 'index-99--4104909142373009110', commit timestamp: 'Timestamp(1574796671, 1014)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.178-0500 I STORAGE [ReplWriterWorker-3] createCollection: test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a with provided UUID: 07fdbe1f-c136-4799-8752-4fd099eb0027 and options: { uuid: UUID("07fdbe1f-c136-4799-8752-4fd099eb0027"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.754-0500 I STORAGE [conn110] Index build initialized: 1a413a98-e131-4073-86fb-4b1668553785: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 (5d6fa62d-1d0b-4c3f-a788-8d271ca06181 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.181-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4'. Ident: collection-93--4104909142373009110, commit timestamp: Timestamp(1574796671, 1014)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.179-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 5c1ae9f7-0080-4400-80cf-9141d4b6f1f3: test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 ( c670e412-d661-4a65-8078-ec5ff359a93f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.754-0500 I INDEX [conn110] Waiting for index build to complete: 1a413a98-e131-4073-86fb-4b1668553785
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.184-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 040df26c-c438-4839-b37b-6be951a26f87: test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 ( d8cf1fbb-f747-4b37-96cd-3963ec009453 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.194-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.754-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 16274378-5535-45cb-a61c-782825863aef: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 ( 7ee90729-1153-4b80-b011-fcdc7ee3a014 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.198-0500 I INDEX [ReplWriterWorker-12] index build: starting on test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.211-0500 I INDEX [ReplWriterWorker-1] index build: starting on test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.755-0500 I INDEX [conn46] Index build completed: 16274378-5535-45cb-a61c-782825863aef
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.198-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.211-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.761-0500 I INDEX [conn108] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.198-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: b092ac5c-5d9f-4ed3-be18-3cf122db5670: test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 (c670e412-d661-4a65-8078-ec5ff359a93f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.211-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 714356d6-36d4-4646-84be-03a0661f3cb1: test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b (c7023eec-2ecd-45da-9e5d-d9873f84474d ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.762-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.198-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.211-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.762-0500 I INDEX [conn108] Registering index build: 3242c62a-e53d-4718-ba96-5b1802c28e2f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.199-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.212-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52982 #52 (14 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.762-0500 I COMMAND [conn114] CMD: drop test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.201-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.212-0500 I NETWORK [conn52] received client metadata from 127.0.0.1:52982 conn52: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.762-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.204-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: b092ac5c-5d9f-4ed3-be18-3cf122db5670: test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 ( c670e412-d661-4a65-8078-ec5ff359a93f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.212-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.773-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.204-0500 I STORAGE [ReplWriterWorker-11] createCollection: test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a with provided UUID: 07fdbe1f-c136-4799-8752-4fd099eb0027 and options: { uuid: UUID("07fdbe1f-c136-4799-8752-4fd099eb0027"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.213-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 (d8cf1fbb-f747-4b37-96cd-3963ec009453) to test1_fsmdb0.agg_out and drop b367778f-f90c-4b21-bd95-25d7e9b4cdde.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.782-0500 I INDEX [conn108] index build: starting on test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.211-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52092 #56 (14 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.215-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.782-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.211-0500 I NETWORK [conn56] received client metadata from 127.0.0.1:52092 conn56: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.215-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test1_fsmdb0.agg_out (b367778f-f90c-4b21-bd95-25d7e9b4cdde) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 1521), t: 1 } and commit timestamp Timestamp(1574796671, 1521)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.782-0500 I STORAGE [conn108] Index build initialized: 3242c62a-e53d-4718-ba96-5b1802c28e2f: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 (7cd99e28-0fa5-4979-824b-e49dbbfe73da ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.218-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.215-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test1_fsmdb0.agg_out (b367778f-f90c-4b21-bd95-25d7e9b4cdde).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.782-0500 I INDEX [conn108] Waiting for index build to complete: 3242c62a-e53d-4718-ba96-5b1802c28e2f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.222-0500 W CONTROL [conn56] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 8 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.215-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection d8cf1fbb-f747-4b37-96cd-3963ec009453 from test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.782-0500 I STORAGE [conn114] dropCollection: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 (418c9b5a-5c5b-489a-9bd0-4f2a944f24c3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.233-0500 I INDEX [ReplWriterWorker-14] index build: starting on test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.215-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (b367778f-f90c-4b21-bd95-25d7e9b4cdde)'. Ident: 'index-80--8000595249233899911', commit timestamp: 'Timestamp(1574796671, 1521)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.782-0500 I STORAGE [conn114] Finishing collection drop for test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 (418c9b5a-5c5b-489a-9bd0-4f2a944f24c3).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.233-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.215-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (b367778f-f90c-4b21-bd95-25d7e9b4cdde)'. Ident: 'index-83--8000595249233899911', commit timestamp: 'Timestamp(1574796671, 1521)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.782-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 (418c9b5a-5c5b-489a-9bd0-4f2a944f24c3)'. Ident: 'index-70-8224331490264904478', commit timestamp: 'Timestamp(1574796670, 4653)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.233-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: ca83178a-da24-4181-8248-4e5f4b5c5223: test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b (c7023eec-2ecd-45da-9e5d-d9873f84474d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.215-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-79--8000595249233899911, commit timestamp: Timestamp(1574796671, 1521)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.782-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313 (418c9b5a-5c5b-489a-9bd0-4f2a944f24c3)'. Ident: 'index-74-8224331490264904478', commit timestamp: 'Timestamp(1574796670, 4653)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.233-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.216-0500 I STORAGE [ReplWriterWorker-6] createCollection: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a with provided UUID: 503e4ffc-8a06-40c8-96cb-7e6f5b72a689 and options: { uuid: UUID("503e4ffc-8a06-40c8-96cb-7e6f5b72a689"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.782-0500 I STORAGE [conn114] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313'. Ident: collection-68-8224331490264904478, commit timestamp: Timestamp(1574796670, 4653)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.234-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.217-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 714356d6-36d4-4646-84be-03a0661f3cb1: test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b ( c7023eec-2ecd-45da-9e5d-d9873f84474d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.782-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 1a413a98-e131-4073-86fb-4b1668553785: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 ( 5d6fa62d-1d0b-4c3f-a788-8d271ca06181 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.234-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 (d8cf1fbb-f747-4b37-96cd-3963ec009453) to test1_fsmdb0.agg_out and drop b367778f-f90c-4b21-bd95-25d7e9b4cdde.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.223-0500 W CONTROL [conn52] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 4 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.236-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.232-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.236-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test1_fsmdb0.agg_out (b367778f-f90c-4b21-bd95-25d7e9b4cdde) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 1521), t: 1 } and commit timestamp Timestamp(1574796671, 1521)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.232-0500 I STORAGE [ReplWriterWorker-14] createCollection: test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab with provided UUID: b284ac1b-9160-4103-86e9-6e2e91b51310 and options: { uuid: UUID("b284ac1b-9160-4103-86e9-6e2e91b51310"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.782-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.236-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test1_fsmdb0.agg_out (b367778f-f90c-4b21-bd95-25d7e9b4cdde).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.249-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.782-0500 I COMMAND [conn68] command test1_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8494613398976810200, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6777965014559624916, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796670521), clusterTime: Timestamp(1574796670, 1816) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 1945), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313\", to: \"test1_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:884 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 259ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.236-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection d8cf1fbb-f747-4b37-96cd-3963ec009453 from test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:14.014-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d") }, $clusterTime: { clusterTime: Timestamp(1574796671, 1014), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2886ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.272-0500 I INDEX [ReplWriterWorker-2] index build: starting on test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.043-0500 I INDEX [conn110] Index build completed: 1a413a98-e131-4073-86fb-4b1668553785
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.236-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (b367778f-f90c-4b21-bd95-25d7e9b4cdde)'. Ident: 'index-80--4104909142373009110', commit timestamp: 'Timestamp(1574796671, 1521)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.272-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:10.783-0500 I COMMAND [conn112] CMD: drop test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.236-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (b367778f-f90c-4b21-bd95-25d7e9b4cdde)'. Ident: 'index-83--4104909142373009110', commit timestamp: 'Timestamp(1574796671, 1521)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.272-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 03086b35-927d-449d-9f8f-c5edc4867cc7: test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a (07fdbe1f-c136-4799-8752-4fd099eb0027 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.043-0500 I COMMAND [conn110] command test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 3576), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 2637 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 337ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.236-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-79--4104909142373009110, commit timestamp: Timestamp(1574796671, 1521)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.272-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.043-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.237-0500 I STORAGE [ReplWriterWorker-4] createCollection: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a with provided UUID: 503e4ffc-8a06-40c8-96cb-7e6f5b72a689 and options: { uuid: UUID("503e4ffc-8a06-40c8-96cb-7e6f5b72a689"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.273-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.046-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.237-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ca83178a-da24-4181-8248-4e5f4b5c5223: test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b ( c7023eec-2ecd-45da-9e5d-d9873f84474d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.274-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 (c670e412-d661-4a65-8078-ec5ff359a93f) to test1_fsmdb0.agg_out and drop d8cf1fbb-f747-4b37-96cd-3963ec009453.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.046-0500 I COMMAND [conn46] command admin.$cmd appName: "tid:1" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440", to: "test1_fsmdb0.agg_out", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 5089), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "admin" } numYields:0 ok:0 errMsg:"collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:563 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 255300 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 255ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.252-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.275-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.046-0500 I STORAGE [conn112] dropCollection: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad (f4a2101d-f864-403b-a1ca-601c782ee658) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.254-0500 I STORAGE [ReplWriterWorker-3] createCollection: test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab with provided UUID: b284ac1b-9160-4103-86e9-6e2e91b51310 and options: { uuid: UUID("b284ac1b-9160-4103-86e9-6e2e91b51310"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.275-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test1_fsmdb0.agg_out (d8cf1fbb-f747-4b37-96cd-3963ec009453) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 2027), t: 1 } and commit timestamp Timestamp(1574796671, 2027)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.046-0500 I STORAGE [conn112] Finishing collection drop for test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad (f4a2101d-f864-403b-a1ca-601c782ee658).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.268-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.275-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test1_fsmdb0.agg_out (d8cf1fbb-f747-4b37-96cd-3963ec009453).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.046-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad (f4a2101d-f864-403b-a1ca-601c782ee658)'. Ident: 'index-77-8224331490264904478', commit timestamp: 'Timestamp(1574796671, 3)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.289-0500 I INDEX [ReplWriterWorker-9] index build: starting on test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.275-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection c670e412-d661-4a65-8078-ec5ff359a93f from test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.046-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad (f4a2101d-f864-403b-a1ca-601c782ee658)'. Ident: 'index-78-8224331490264904478', commit timestamp: 'Timestamp(1574796671, 3)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.289-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.275-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (d8cf1fbb-f747-4b37-96cd-3963ec009453)'. Ident: 'index-102--8000595249233899911', commit timestamp: 'Timestamp(1574796671, 2027)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.046-0500 I STORAGE [conn112] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad'. Ident: collection-75-8224331490264904478, commit timestamp: Timestamp(1574796671, 3)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.289-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 5eced510-5aff-4cf7-a3e0-1b61c7eb53e8: test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a (07fdbe1f-c136-4799-8752-4fd099eb0027 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.275-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (d8cf1fbb-f747-4b37-96cd-3963ec009453)'. Ident: 'index-107--8000595249233899911', commit timestamp: 'Timestamp(1574796671, 2027)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.046-0500 I COMMAND [conn112] command test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad command: drop { drop: "tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad", databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, $clusterTime: { clusterTime: Timestamp(1574796670, 4717), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:420 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 3083 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 263ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.289-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.276-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-101--8000595249233899911, commit timestamp: Timestamp(1574796671, 2027)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:14.021-0500 I COMMAND [conn74] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856") }, $clusterTime: { clusterTime: Timestamp(1574796671, 1521), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2854ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:14.078-0500 I COMMAND [conn33] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063") }, $clusterTime: { clusterTime: Timestamp(1574796671, 2091), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:827 protocol:op_msg 2860ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:15.154-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.046-0500 I COMMAND [conn112] CMD: drop test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.290-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.278-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 03086b35-927d-449d-9f8f-c5edc4867cc7: test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a ( 07fdbe1f-c136-4799-8752-4fd099eb0027 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:14.141-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563") }, $clusterTime: { clusterTime: Timestamp(1574796671, 2530), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:827 protocol:op_msg 2891ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:14.105-0500 I COMMAND [conn32] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee") }, $clusterTime: { clusterTime: Timestamp(1574796671, 1465), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:827 protocol:op_msg 2961ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.047-0500 I COMMAND [conn65] command test1_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5761612177472945899, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2105278491293755543, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796670586), clusterTime: Timestamp(1574796670, 2948) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 3066), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad\", to: \"test1_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:884 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 425ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.290-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 (c670e412-d661-4a65-8078-ec5ff359a93f) to test1_fsmdb0.agg_out and drop d8cf1fbb-f747-4b37-96cd-3963ec009453.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.280-0500 I STORAGE [ReplWriterWorker-0] createCollection: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f with provided UUID: f5f2650b-d83a-49fb-b607-dd23864e5360 and options: { uuid: UUID("f5f2650b-d83a-49fb-b607-dd23864e5360"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:14.176-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d") }, $clusterTime: { clusterTime: Timestamp(1574796671, 3100), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:827 protocol:op_msg 161ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:14.260-0500 I COMMAND [conn33] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063") }, $clusterTime: { clusterTime: Timestamp(1574796674, 1015), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:820 protocol:op_msg 180ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.047-0500 I STORAGE [conn112] dropCollection: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 (7ee90729-1153-4b80-b011-fcdc7ee3a014) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.291-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.295-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:14.183-0500 I COMMAND [conn74] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856") }, $clusterTime: { clusterTime: Timestamp(1574796674, 10), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 143ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.047-0500 I STORAGE [conn112] Finishing collection drop for test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 (7ee90729-1153-4b80-b011-fcdc7ee3a014).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.292-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test1_fsmdb0.agg_out (d8cf1fbb-f747-4b37-96cd-3963ec009453) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 2027), t: 1 } and commit timestamp Timestamp(1574796671, 2027)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.299-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b (c7023eec-2ecd-45da-9e5d-d9873f84474d) to test1_fsmdb0.agg_out and drop c670e412-d661-4a65-8078-ec5ff359a93f.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.047-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 (7ee90729-1153-4b80-b011-fcdc7ee3a014)'. Ident: 'index-82-8224331490264904478', commit timestamp: 'Timestamp(1574796671, 5)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.292-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test1_fsmdb0.agg_out (d8cf1fbb-f747-4b37-96cd-3963ec009453).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.299-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test1_fsmdb0.agg_out (c670e412-d661-4a65-8078-ec5ff359a93f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 2530), t: 1 } and commit timestamp Timestamp(1574796671, 2530)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.047-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440 (7ee90729-1153-4b80-b011-fcdc7ee3a014)'. Ident: 'index-84-8224331490264904478', commit timestamp: 'Timestamp(1574796671, 5)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.292-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection c670e412-d661-4a65-8078-ec5ff359a93f from test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.299-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test1_fsmdb0.agg_out (c670e412-d661-4a65-8078-ec5ff359a93f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.047-0500 I STORAGE [conn112] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440'. Ident: collection-79-8224331490264904478, commit timestamp: Timestamp(1574796671, 5)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.292-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (d8cf1fbb-f747-4b37-96cd-3963ec009453)'. Ident: 'index-102--4104909142373009110', commit timestamp: 'Timestamp(1574796671, 2027)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.299-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection c7023eec-2ecd-45da-9e5d-d9873f84474d from test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.047-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3242c62a-e53d-4718-ba96-5b1802c28e2f: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 ( 7cd99e28-0fa5-4979-824b-e49dbbfe73da ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.292-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (d8cf1fbb-f747-4b37-96cd-3963ec009453)'. Ident: 'index-107--4104909142373009110', commit timestamp: 'Timestamp(1574796671, 2027)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.299-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (c670e412-d661-4a65-8078-ec5ff359a93f)'. Ident: 'index-104--8000595249233899911', commit timestamp: 'Timestamp(1574796671, 2530)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.047-0500 I INDEX [conn108] Index build completed: 3242c62a-e53d-4718-ba96-5b1802c28e2f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.292-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-101--4104909142373009110, commit timestamp: Timestamp(1574796671, 2027)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.299-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (c670e412-d661-4a65-8078-ec5ff359a93f)'. Ident: 'index-109--8000595249233899911', commit timestamp: 'Timestamp(1574796671, 2530)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.047-0500 I COMMAND [conn71] command test1_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3485909073483348551, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4689242593817337631, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796670623), clusterTime: Timestamp(1574796670, 3066) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 3071), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440\", to: \"test1_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:884 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 393ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.294-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 5eced510-5aff-4cf7-a3e0-1b61c7eb53e8: test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a ( 07fdbe1f-c136-4799-8752-4fd099eb0027 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.299-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-103--8000595249233899911, commit timestamp: Timestamp(1574796671, 2530)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.047-0500 I COMMAND [conn108] command test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 4149), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 285ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.297-0500 I STORAGE [ReplWriterWorker-5] createCollection: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f with provided UUID: f5f2650b-d83a-49fb-b607-dd23864e5360 and options: { uuid: UUID("f5f2650b-d83a-49fb-b607-dd23864e5360"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.300-0500 I STORAGE [ReplWriterWorker-4] createCollection: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 with provided UUID: f4608a2b-7f90-424d-bb80-675faa9006ee and options: { uuid: UUID("f4608a2b-7f90-424d-bb80-675faa9006ee"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.049-0500 I STORAGE [conn110] createCollection: test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 with generated UUID: d8cf1fbb-f747-4b37-96cd-3963ec009453 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.311-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.315-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.050-0500 I STORAGE [conn46] createCollection: test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 with generated UUID: c670e412-d661-4a65-8078-ec5ff359a93f and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.315-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b (c7023eec-2ecd-45da-9e5d-d9873f84474d) to test1_fsmdb0.agg_out and drop c670e412-d661-4a65-8078-ec5ff359a93f.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.330-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.051-0500 I STORAGE [conn108] createCollection: test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b with generated UUID: c7023eec-2ecd-45da-9e5d-d9873f84474d and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.316-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test1_fsmdb0.agg_out (c670e412-d661-4a65-8078-ec5ff359a93f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 2530), t: 1 } and commit timestamp Timestamp(1574796671, 2530)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.330-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.080-0500 I INDEX [conn110] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.316-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test1_fsmdb0.agg_out (c670e412-d661-4a65-8078-ec5ff359a93f).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.330-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 37f2ec9a-4171-4ecf-a39a-fa317926e8f1: test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab (b284ac1b-9160-4103-86e9-6e2e91b51310 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.081-0500 I INDEX [conn112] Registering index build: e78cd8bf-3b65-43f3-bb1b-a412f69d23fb
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.316-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection c7023eec-2ecd-45da-9e5d-d9873f84474d from test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.330-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.087-0500 I INDEX [conn46] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.316-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (c670e412-d661-4a65-8078-ec5ff359a93f)'. Ident: 'index-104--4104909142373009110', commit timestamp: 'Timestamp(1574796671, 2530)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.331-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.092-0500 I INDEX [conn108] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.316-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (c670e412-d661-4a65-8078-ec5ff359a93f)'. Ident: 'index-109--4104909142373009110', commit timestamp: 'Timestamp(1574796671, 2530)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.332-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.107-0500 I INDEX [conn112] index build: starting on test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.316-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-103--4104909142373009110, commit timestamp: Timestamp(1574796671, 2530)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.338-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 37f2ec9a-4171-4ecf-a39a-fa317926e8f1: test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab ( b284ac1b-9160-4103-86e9-6e2e91b51310 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.107-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.316-0500 I STORAGE [ReplWriterWorker-8] createCollection: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 with provided UUID: f4608a2b-7f90-424d-bb80-675faa9006ee and options: { uuid: UUID("f4608a2b-7f90-424d-bb80-675faa9006ee"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.343-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a (07fdbe1f-c136-4799-8752-4fd099eb0027) to test1_fsmdb0.agg_out and drop c7023eec-2ecd-45da-9e5d-d9873f84474d.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.107-0500 I STORAGE [conn112] Index build initialized: e78cd8bf-3b65-43f3-bb1b-a412f69d23fb: test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 (d8cf1fbb-f747-4b37-96cd-3963ec009453 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.331-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.343-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test1_fsmdb0.agg_out (c7023eec-2ecd-45da-9e5d-d9873f84474d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 3036), t: 1 } and commit timestamp Timestamp(1574796671, 3036)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.107-0500 I INDEX [conn112] Waiting for index build to complete: e78cd8bf-3b65-43f3-bb1b-a412f69d23fb
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.345-0500 I INDEX [ReplWriterWorker-12] index build: starting on test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.343-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test1_fsmdb0.agg_out (c7023eec-2ecd-45da-9e5d-d9873f84474d).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.108-0500 I INDEX [conn46] Registering index build: 565db008-c981-4941-9afd-2968a90fa74b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.345-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.343-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 07fdbe1f-c136-4799-8752-4fd099eb0027 from test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.108-0500 I INDEX [conn108] Registering index build: a27b8083-beec-424e-b356-a43e785a0ce7
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.346-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 105f5b49-dae5-4140-a139-0e1b22324800: test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab (b284ac1b-9160-4103-86e9-6e2e91b51310 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.343-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (c7023eec-2ecd-45da-9e5d-d9873f84474d)'. Ident: 'index-106--8000595249233899911', commit timestamp: 'Timestamp(1574796671, 3036)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.108-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.346-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.343-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (c7023eec-2ecd-45da-9e5d-d9873f84474d)'. Ident: 'index-113--8000595249233899911', commit timestamp: 'Timestamp(1574796671, 3036)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.108-0500 I COMMAND [conn114] CMD: drop test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.346-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:11.343-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-105--8000595249233899911, commit timestamp: Timestamp(1574796671, 3036)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.108-0500 I COMMAND [conn110] CMD: drop test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.349-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.108-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.036-0500 I INDEX [ReplWriterWorker-3] index build: starting on test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.351-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 105f5b49-dae5-4140-a139-0e1b22324800: test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab ( b284ac1b-9160-4103-86e9-6e2e91b51310 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.111-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.036-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.355-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a (07fdbe1f-c136-4799-8752-4fd099eb0027) to test1_fsmdb0.agg_out and drop c7023eec-2ecd-45da-9e5d-d9873f84474d.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.118-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e78cd8bf-3b65-43f3-bb1b-a412f69d23fb: test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 ( d8cf1fbb-f747-4b37-96cd-3963ec009453 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.036-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 82c0d99c-6fe7-4c91-a6d7-380ed45907a1: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f (f5f2650b-d83a-49fb-b607-dd23864e5360 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.355-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test1_fsmdb0.agg_out (c7023eec-2ecd-45da-9e5d-d9873f84474d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 3036), t: 1 } and commit timestamp Timestamp(1574796671, 3036)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.125-0500 I INDEX [conn46] index build: starting on test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.036-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.355-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test1_fsmdb0.agg_out (c7023eec-2ecd-45da-9e5d-d9873f84474d).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.125-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.036-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.356-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 07fdbe1f-c136-4799-8752-4fd099eb0027 from test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.125-0500 I STORAGE [conn46] Index build initialized: 565db008-c981-4941-9afd-2968a90fa74b: test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 (c670e412-d661-4a65-8078-ec5ff359a93f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.039-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.356-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (c7023eec-2ecd-45da-9e5d-d9873f84474d)'. Ident: 'index-106--4104909142373009110', commit timestamp: 'Timestamp(1574796671, 3036)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.125-0500 I INDEX [conn46] Waiting for index build to complete: 565db008-c981-4941-9afd-2968a90fa74b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.047-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 82c0d99c-6fe7-4c91-a6d7-380ed45907a1: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f ( f5f2650b-d83a-49fb-b607-dd23864e5360 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.356-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (c7023eec-2ecd-45da-9e5d-d9873f84474d)'. Ident: 'index-113--4104909142373009110', commit timestamp: 'Timestamp(1574796671, 3036)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.125-0500 I INDEX [conn112] Index build completed: e78cd8bf-3b65-43f3-bb1b-a412f69d23fb
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.056-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:11.356-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-105--4104909142373009110, commit timestamp: Timestamp(1574796671, 3036)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.125-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.056-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.125-0500 I STORAGE [conn114] dropCollection: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 (5d6fa62d-1d0b-4c3f-a788-8d271ca06181) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.052-0500 I INDEX [ReplWriterWorker-14] index build: starting on test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.057-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 57174fbb-484e-440b-8b8b-6dd154524e27: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a (503e4ffc-8a06-40c8-96cb-7e6f5b72a689 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.125-0500 I STORAGE [conn110] dropCollection: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 (7cd99e28-0fa5-4979-824b-e49dbbfe73da) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.052-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.057-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.125-0500 I STORAGE [conn114] Finishing collection drop for test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 (5d6fa62d-1d0b-4c3f-a788-8d271ca06181).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.052-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 77323f69-5a3f-4b40-bec2-5680fba90263: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f (f5f2650b-d83a-49fb-b607-dd23864e5360 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.057-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.125-0500 I STORAGE [conn110] Finishing collection drop for test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 (7cd99e28-0fa5-4979-824b-e49dbbfe73da).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.052-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.058-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab (b284ac1b-9160-4103-86e9-6e2e91b51310) to test1_fsmdb0.agg_out and drop 07fdbe1f-c136-4799-8752-4fd099eb0027.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.125-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 (5d6fa62d-1d0b-4c3f-a788-8d271ca06181)'. Ident: 'index-83-8224331490264904478', commit timestamp: 'Timestamp(1574796671, 1013)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.053-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.060-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.125-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 (7cd99e28-0fa5-4979-824b-e49dbbfe73da)'. Ident: 'index-89-8224331490264904478', commit timestamp: 'Timestamp(1574796671, 1014)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.055-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.060-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test1_fsmdb0.agg_out (07fdbe1f-c136-4799-8752-4fd099eb0027) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796674, 6), t: 1 } and commit timestamp Timestamp(1574796674, 6)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.125-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0 (5d6fa62d-1d0b-4c3f-a788-8d271ca06181)'. Ident: 'index-86-8224331490264904478', commit timestamp: 'Timestamp(1574796671, 1013)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.056-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 77323f69-5a3f-4b40-bec2-5680fba90263: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f ( f5f2650b-d83a-49fb-b607-dd23864e5360 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.060-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test1_fsmdb0.agg_out (07fdbe1f-c136-4799-8752-4fd099eb0027).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.125-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4 (7cd99e28-0fa5-4979-824b-e49dbbfe73da)'. Ident: 'index-90-8224331490264904478', commit timestamp: 'Timestamp(1574796671, 1014)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.074-0500 I INDEX [ReplWriterWorker-8] index build: starting on test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.060-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection b284ac1b-9160-4103-86e9-6e2e91b51310 from test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.125-0500 I STORAGE [conn114] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0'. Ident: collection-80-8224331490264904478, commit timestamp: Timestamp(1574796671, 1013)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.074-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.060-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (07fdbe1f-c136-4799-8752-4fd099eb0027)'. Ident: 'index-112--8000595249233899911', commit timestamp: 'Timestamp(1574796674, 6)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.125-0500 I STORAGE [conn110] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4'. Ident: collection-87-8224331490264904478, commit timestamp: Timestamp(1574796671, 1014)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.074-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 9e5f09e0-1fa7-49f7-b47b-223802bea4f6: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a (503e4ffc-8a06-40c8-96cb-7e6f5b72a689 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.060-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (07fdbe1f-c136-4799-8752-4fd099eb0027)'. Ident: 'index-119--8000595249233899911', commit timestamp: 'Timestamp(1574796674, 6)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.126-0500 I COMMAND [conn67] command test1_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8310058429917547115, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7997513392986416276, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796670726), clusterTime: Timestamp(1574796670, 3645) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 3709), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:994 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 398ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.074-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.060-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-111--8000595249233899911, commit timestamp: Timestamp(1574796674, 6)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.126-0500 I COMMAND [conn70] command test1_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 237252942054943014, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6267674592717753674, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796670657), clusterTime: Timestamp(1574796670, 3071) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796670, 3072), signature: { hash: BinData(0, 49A8305EA2704FB306183AA332FFFA2DB925DD76), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0\", to: \"test1_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:866 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 468ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.074-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.061-0500 I STORAGE [ReplWriterWorker-15] createCollection: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb with provided UUID: a88d13ca-f01d-4def-89bb-92dfbaa6c76e and options: { uuid: UUID("a88d13ca-f01d-4def-89bb-92dfbaa6c76e"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.126-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.075-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab (b284ac1b-9160-4103-86e9-6e2e91b51310) to test1_fsmdb0.agg_out and drop 07fdbe1f-c136-4799-8752-4fd099eb0027.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.062-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 57174fbb-484e-440b-8b8b-6dd154524e27: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a ( 503e4ffc-8a06-40c8-96cb-7e6f5b72a689 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.128-0500 I COMMAND [conn70] CMD: dropIndexes test1_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.077-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.077-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.134-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.077-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test1_fsmdb0.agg_out (07fdbe1f-c136-4799-8752-4fd099eb0027) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796674, 6), t: 1 } and commit timestamp Timestamp(1574796674, 6)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.094-0500 I INDEX [ReplWriterWorker-11] index build: starting on test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.141-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39034 #116 (41 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.077-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test1_fsmdb0.agg_out (07fdbe1f-c136-4799-8752-4fd099eb0027).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.094-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.141-0500 I NETWORK [conn116] received client metadata from 127.0.0.1:39034 conn116: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.077-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection b284ac1b-9160-4103-86e9-6e2e91b51310 from test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.094-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: b9bb5b22-b025-4590-b31b-4ebb0b86174f: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 (f4608a2b-7f90-424d-bb80-675faa9006ee ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.142-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39040 #117 (42 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.077-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (07fdbe1f-c136-4799-8752-4fd099eb0027)'. Ident: 'index-112--4104909142373009110', commit timestamp: 'Timestamp(1574796674, 6)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.094-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.142-0500 I INDEX [conn108] index build: starting on test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.077-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (07fdbe1f-c136-4799-8752-4fd099eb0027)'. Ident: 'index-119--4104909142373009110', commit timestamp: 'Timestamp(1574796674, 6)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.095-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.142-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.077-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-111--4104909142373009110, commit timestamp: Timestamp(1574796674, 6)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.097-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.142-0500 I STORAGE [conn108] Index build initialized: a27b8083-beec-424e-b356-a43e785a0ce7: test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b (c7023eec-2ecd-45da-9e5d-d9873f84474d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.078-0500 I STORAGE [ReplWriterWorker-3] createCollection: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb with provided UUID: a88d13ca-f01d-4def-89bb-92dfbaa6c76e and options: { uuid: UUID("a88d13ca-f01d-4def-89bb-92dfbaa6c76e"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.098-0500 I STORAGE [ReplWriterWorker-2] createCollection: test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 with provided UUID: d957eda8-107c-4316-bee2-07ec72ecf5f9 and options: { uuid: UUID("d957eda8-107c-4316-bee2-07ec72ecf5f9"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.142-0500 I INDEX [conn108] Waiting for index build to complete: a27b8083-beec-424e-b356-a43e785a0ce7
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.079-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 9e5f09e0-1fa7-49f7-b47b-223802bea4f6: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a ( 503e4ffc-8a06-40c8-96cb-7e6f5b72a689 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.099-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: b9bb5b22-b025-4590-b31b-4ebb0b86174f: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 ( f4608a2b-7f90-424d-bb80-675faa9006ee ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.142-0500 I NETWORK [conn117] received client metadata from 127.0.0.1:39040 conn117: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.095-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.115-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.142-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.111-0500 I INDEX [ReplWriterWorker-7] index build: starting on test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.147-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.143-0500 I STORAGE [conn110] createCollection: test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a with generated UUID: 07fdbe1f-c136-4799-8752-4fd099eb0027 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.111-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.147-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.147-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 565db008-c981-4941-9afd-2968a90fa74b: test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 ( c670e412-d661-4a65-8078-ec5ff359a93f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.111-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 228ad0b4-e92e-4f09-b670-cf82cafa732a: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 (f4608a2b-7f90-424d-bb80-675faa9006ee ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.147-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 64efb0be-f738-4c15-8cd4-623b64b2fea3: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb (a88d13ca-f01d-4def-89bb-92dfbaa6c76e ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.147-0500 I INDEX [conn46] Index build completed: 565db008-c981-4941-9afd-2968a90fa74b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.111-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.148-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.147-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.111-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.148-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.156-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.114-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.149-0500 I COMMAND [ReplWriterWorker-9] CMD: drop test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.164-0500 I INDEX [conn110] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.115-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 228ad0b4-e92e-4f09-b670-cf82cafa732a: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 ( f4608a2b-7f90-424d-bb80-675faa9006ee ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.149-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f (f5f2650b-d83a-49fb-b607-dd23864e5360) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796674, 1015), t: 1 } and commit timestamp Timestamp(1574796674, 1015)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.164-0500 I COMMAND [conn112] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.116-0500 I STORAGE [ReplWriterWorker-6] createCollection: test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 with provided UUID: d957eda8-107c-4316-bee2-07ec72ecf5f9 and options: { uuid: UUID("d957eda8-107c-4316-bee2-07ec72ecf5f9"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.149-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f (f5f2650b-d83a-49fb-b607-dd23864e5360).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.165-0500 I STORAGE [conn112] dropCollection: test1_fsmdb0.agg_out (b367778f-f90c-4b21-bd95-25d7e9b4cdde) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 1521), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.131-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.149-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f (f5f2650b-d83a-49fb-b607-dd23864e5360)'. Ident: 'index-122--8000595249233899911', commit timestamp: 'Timestamp(1574796674, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.165-0500 I STORAGE [conn112] Finishing collection drop for test1_fsmdb0.agg_out (b367778f-f90c-4b21-bd95-25d7e9b4cdde).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.164-0500 I INDEX [ReplWriterWorker-2] index build: starting on test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.149-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f (f5f2650b-d83a-49fb-b607-dd23864e5360)'. Ident: 'index-127--8000595249233899911', commit timestamp: 'Timestamp(1574796674, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.165-0500 I STORAGE [conn112] renameCollection: renaming collection d8cf1fbb-f747-4b37-96cd-3963ec009453 from test1_fsmdb0.tmp.agg_out.86cbd67a-c8aa-474a-8847-e0db221b17d7 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.164-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.149-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f'. Ident: collection-121--8000595249233899911, commit timestamp: Timestamp(1574796674, 1015)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.165-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (b367778f-f90c-4b21-bd95-25d7e9b4cdde)'. Ident: 'index-71-8224331490264904478', commit timestamp: 'Timestamp(1574796671, 1521)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.164-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 3d5f067b-05c0-429c-9b3d-dbf0aaec85de: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb (a88d13ca-f01d-4def-89bb-92dfbaa6c76e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.151-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.165-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (b367778f-f90c-4b21-bd95-25d7e9b4cdde)'. Ident: 'index-72-8224331490264904478', commit timestamp: 'Timestamp(1574796671, 1521)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.164-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.152-0500 I STORAGE [ReplWriterWorker-7] createCollection: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae with provided UUID: eaccdea6-06b1-45d9-883d-16ca6ae0b7d6 and options: { uuid: UUID("eaccdea6-06b1-45d9-883d-16ca6ae0b7d6"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.165-0500 I STORAGE [conn112] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-69-8224331490264904478, commit timestamp: Timestamp(1574796671, 1521)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.164-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.154-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 64efb0be-f738-4c15-8cd4-623b64b2fea3: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb ( a88d13ca-f01d-4def-89bb-92dfbaa6c76e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.165-0500 I INDEX [conn110] Registering index build: 2527b848-1ce1-4ea7-bf7e-5c97df149dfd
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.165-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.170-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.165-0500 I COMMAND [conn68] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8034574537967120368, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7463426918084456955, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796671048), clusterTime: Timestamp(1574796671, 4) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796671, 5), signature: { hash: BinData(0, D613E78D194EC2353ABD788EFEF825FE39426034), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 116ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.165-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f (f5f2650b-d83a-49fb-b607-dd23864e5360) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796674, 1015), t: 1 } and commit timestamp Timestamp(1574796674, 1015)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.173-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.165-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: a27b8083-beec-424e-b356-a43e785a0ce7: test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b ( c7023eec-2ecd-45da-9e5d-d9873f84474d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.165-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f (f5f2650b-d83a-49fb-b607-dd23864e5360).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.173-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a (503e4ffc-8a06-40c8-96cb-7e6f5b72a689) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796674, 1518), t: 1 } and commit timestamp Timestamp(1574796674, 1518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.165-0500 I STORAGE [conn112] createCollection: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a with generated UUID: 503e4ffc-8a06-40c8-96cb-7e6f5b72a689 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.165-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f (f5f2650b-d83a-49fb-b607-dd23864e5360)'. Ident: 'index-122--4104909142373009110', commit timestamp: 'Timestamp(1574796674, 1015)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.165-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f (f5f2650b-d83a-49fb-b607-dd23864e5360)'. Ident: 'index-127--4104909142373009110', commit timestamp: 'Timestamp(1574796674, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.168-0500 I STORAGE [conn46] createCollection: test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab with generated UUID: b284ac1b-9160-4103-86e9-6e2e91b51310 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.165-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f'. Ident: collection-121--4104909142373009110, commit timestamp: Timestamp(1574796674, 1015)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.196-0500 I INDEX [conn110] index build: starting on test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.167-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.196-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.173-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a (503e4ffc-8a06-40c8-96cb-7e6f5b72a689).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.170-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 3d5f067b-05c0-429c-9b3d-dbf0aaec85de: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb ( a88d13ca-f01d-4def-89bb-92dfbaa6c76e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.196-0500 I STORAGE [conn110] Index build initialized: 2527b848-1ce1-4ea7-bf7e-5c97df149dfd: test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a (07fdbe1f-c136-4799-8752-4fd099eb0027 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.173-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a (503e4ffc-8a06-40c8-96cb-7e6f5b72a689)'. Ident: 'index-116--8000595249233899911', commit timestamp: 'Timestamp(1574796674, 1518)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.171-0500 I STORAGE [ReplWriterWorker-7] createCollection: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae with provided UUID: eaccdea6-06b1-45d9-883d-16ca6ae0b7d6 and options: { uuid: UUID("eaccdea6-06b1-45d9-883d-16ca6ae0b7d6"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.196-0500 I INDEX [conn110] Waiting for index build to complete: 2527b848-1ce1-4ea7-bf7e-5c97df149dfd
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.173-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a (503e4ffc-8a06-40c8-96cb-7e6f5b72a689)'. Ident: 'index-129--8000595249233899911', commit timestamp: 'Timestamp(1574796674, 1518)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.186-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.196-0500 I INDEX [conn108] Index build completed: a27b8083-beec-424e-b356-a43e785a0ce7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.173-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a'. Ident: collection-115--8000595249233899911, commit timestamp: Timestamp(1574796674, 1518)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.189-0500 I COMMAND [ReplWriterWorker-1] CMD: drop test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.196-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.190-0500 I INDEX [ReplWriterWorker-2] index build: starting on test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.189-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a (503e4ffc-8a06-40c8-96cb-7e6f5b72a689) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796674, 1518), t: 1 } and commit timestamp Timestamp(1574796674, 1518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.196-0500 I COMMAND [conn108] command test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796671, 1008), signature: { hash: BinData(0, D613E78D194EC2353ABD788EFEF825FE39426034), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 14742 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 103ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.190-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.189-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a (503e4ffc-8a06-40c8-96cb-7e6f5b72a689).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.205-0500 I INDEX [conn112] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.190-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 43c455eb-0f8f-497d-9666-618a52b48229: test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 (d957eda8-107c-4316-bee2-07ec72ecf5f9 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.190-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a (503e4ffc-8a06-40c8-96cb-7e6f5b72a689)'. Ident: 'index-116--4104909142373009110', commit timestamp: 'Timestamp(1574796674, 1518)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.207-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39054 #118 (43 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.191-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.190-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a (503e4ffc-8a06-40c8-96cb-7e6f5b72a689)'. Ident: 'index-129--4104909142373009110', commit timestamp: 'Timestamp(1574796674, 1518)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.208-0500 I NETWORK [conn118] received client metadata from 127.0.0.1:39054 conn118: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.191-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.190-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a'. Ident: collection-115--4104909142373009110, commit timestamp: Timestamp(1574796674, 1518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.210-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39056 #119 (44 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.192-0500 I STORAGE [ReplWriterWorker-10] createCollection: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 with provided UUID: a41f43d1-2c89-4c41-a8f5-515233da8c7e and options: { uuid: UUID("a41f43d1-2c89-4c41-a8f5-515233da8c7e"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.206-0500 I INDEX [ReplWriterWorker-14] index build: starting on test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.210-0500 I NETWORK [conn119] received client metadata from 127.0.0.1:39056 conn119: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.194-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.206-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.212-0500 I INDEX [conn46] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.202-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 43c455eb-0f8f-497d-9666-618a52b48229: test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 ( d957eda8-107c-4316-bee2-07ec72ecf5f9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.206-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 7172f09f-f495-4817-b361-a8ce5bd099ff: test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 (d957eda8-107c-4316-bee2-07ec72ecf5f9 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.212-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.209-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.206-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.215-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.215-0500 I COMMAND [ReplWriterWorker-8] CMD: drop test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.207-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.215-0500 I COMMAND [conn114] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.215-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 (f4608a2b-7f90-424d-bb80-675faa9006ee) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796674, 1971), t: 1 } and commit timestamp Timestamp(1574796674, 1971)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.209-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.215-0500 I STORAGE [conn114] dropCollection: test1_fsmdb0.agg_out (d8cf1fbb-f747-4b37-96cd-3963ec009453) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 2027), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.215-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 (f4608a2b-7f90-424d-bb80-675faa9006ee).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.210-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 7172f09f-f495-4817-b361-a8ce5bd099ff: test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 ( d957eda8-107c-4316-bee2-07ec72ecf5f9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.215-0500 I STORAGE [conn114] Finishing collection drop for test1_fsmdb0.agg_out (d8cf1fbb-f747-4b37-96cd-3963ec009453).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.215-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 (f4608a2b-7f90-424d-bb80-675faa9006ee)'. Ident: 'index-124--8000595249233899911', commit timestamp: 'Timestamp(1574796674, 1971)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.210-0500 I STORAGE [ReplWriterWorker-3] createCollection: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 with provided UUID: a41f43d1-2c89-4c41-a8f5-515233da8c7e and options: { uuid: UUID("a41f43d1-2c89-4c41-a8f5-515233da8c7e"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.215-0500 I STORAGE [conn114] renameCollection: renaming collection c670e412-d661-4a65-8078-ec5ff359a93f from test1_fsmdb0.tmp.agg_out.39186b3a-fb03-42b8-8673-8e88f6471656 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.215-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 (f4608a2b-7f90-424d-bb80-675faa9006ee)'. Ident: 'index-133--8000595249233899911', commit timestamp: 'Timestamp(1574796674, 1971)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.224-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.215-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (d8cf1fbb-f747-4b37-96cd-3963ec009453)'. Ident: 'index-95-8224331490264904478', commit timestamp: 'Timestamp(1574796671, 2027)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.215-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954'. Ident: collection-123--8000595249233899911, commit timestamp: Timestamp(1574796674, 1971)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.231-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.215-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (d8cf1fbb-f747-4b37-96cd-3963ec009453)'. Ident: 'index-98-8224331490264904478', commit timestamp: 'Timestamp(1574796671, 2027)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.233-0500 I INDEX [ReplWriterWorker-5] index build: starting on test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.231-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 (f4608a2b-7f90-424d-bb80-675faa9006ee) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796674, 1971), t: 1 } and commit timestamp Timestamp(1574796674, 1971)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.215-0500 I STORAGE [conn114] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-92-8224331490264904478, commit timestamp: Timestamp(1574796671, 2027)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.233-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.231-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 (f4608a2b-7f90-424d-bb80-675faa9006ee).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.216-0500 I INDEX [conn46] Registering index build: dbda5a6e-dc99-4102-94ef-a99d1814038f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.233-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 27af3faa-29e0-4f65-a17d-0eb9ef13019c: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae (eaccdea6-06b1-45d9-883d-16ca6ae0b7d6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.231-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 (f4608a2b-7f90-424d-bb80-675faa9006ee)'. Ident: 'index-124--4104909142373009110', commit timestamp: 'Timestamp(1574796674, 1971)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.216-0500 I INDEX [conn112] Registering index build: 48a7b0cf-4235-40bf-a28b-8a3755593d59
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.234-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.231-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 (f4608a2b-7f90-424d-bb80-675faa9006ee)'. Ident: 'index-133--4104909142373009110', commit timestamp: 'Timestamp(1574796674, 1971)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.216-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4181137846470563800, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5010717386382713333, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796671049), clusterTime: Timestamp(1574796671, 5) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796671, 6), signature: { hash: BinData(0, D613E78D194EC2353ABD788EFEF825FE39426034), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 166ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.234-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.231-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954'. Ident: collection-123--4104909142373009110, commit timestamp: Timestamp(1574796674, 1971)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.216-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 2527b848-1ce1-4ea7-bf7e-5c97df149dfd: test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a ( 07fdbe1f-c136-4799-8752-4fd099eb0027 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.237-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.250-0500 I INDEX [ReplWriterWorker-7] index build: starting on test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.219-0500 I STORAGE [conn114] createCollection: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f with generated UUID: f5f2650b-d83a-49fb-b607-dd23864e5360 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.239-0500 I STORAGE [ReplWriterWorker-14] createCollection: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 with provided UUID: 5e9a527a-e215-4481-815a-143e8412903a and options: { uuid: UUID("5e9a527a-e215-4481-815a-143e8412903a"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.251-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.222-0500 W CONTROL [conn119] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 4 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.239-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 27af3faa-29e0-4f65-a17d-0eb9ef13019c: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae ( eaccdea6-06b1-45d9-883d-16ca6ae0b7d6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.251-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 8299ab4a-ddf6-4812-84b1-50299984f122: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae (eaccdea6-06b1-45d9-883d-16ca6ae0b7d6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.240-0500 I INDEX [conn46] index build: starting on test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.257-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.251-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.240-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.260-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.251-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.240-0500 I STORAGE [conn46] Index build initialized: dbda5a6e-dc99-4102-94ef-a99d1814038f: test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab (b284ac1b-9160-4103-86e9-6e2e91b51310 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.260-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb (a88d13ca-f01d-4def-89bb-92dfbaa6c76e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796674, 2528), t: 1 } and commit timestamp Timestamp(1574796674, 2528)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.253-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.240-0500 I INDEX [conn46] Waiting for index build to complete: dbda5a6e-dc99-4102-94ef-a99d1814038f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.260-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb (a88d13ca-f01d-4def-89bb-92dfbaa6c76e).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.258-0500 I STORAGE [ReplWriterWorker-14] createCollection: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 with provided UUID: 5e9a527a-e215-4481-815a-143e8412903a and options: { uuid: UUID("5e9a527a-e215-4481-815a-143e8412903a"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.241-0500 I INDEX [conn110] Index build completed: 2527b848-1ce1-4ea7-bf7e-5c97df149dfd
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.260-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb (a88d13ca-f01d-4def-89bb-92dfbaa6c76e)'. Ident: 'index-132--8000595249233899911', commit timestamp: 'Timestamp(1574796674, 2528)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.258-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 8299ab4a-ddf6-4812-84b1-50299984f122: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae ( eaccdea6-06b1-45d9-883d-16ca6ae0b7d6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.247-0500 I INDEX [conn114] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.260-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb (a88d13ca-f01d-4def-89bb-92dfbaa6c76e)'. Ident: 'index-137--8000595249233899911', commit timestamp: 'Timestamp(1574796674, 2528)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.272-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.248-0500 I COMMAND [conn108] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.260-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb'. Ident: collection-131--8000595249233899911, commit timestamp: Timestamp(1574796674, 2528)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.276-0500 I COMMAND [ReplWriterWorker-4] CMD: drop test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.248-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.agg_out (c670e412-d661-4a65-8078-ec5ff359a93f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 2530), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.261-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 (d957eda8-107c-4316-bee2-07ec72ecf5f9) to test1_fsmdb0.agg_out and drop b284ac1b-9160-4103-86e9-6e2e91b51310.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.276-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb (a88d13ca-f01d-4def-89bb-92dfbaa6c76e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796674, 2528), t: 1 } and commit timestamp Timestamp(1574796674, 2528)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.248-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.agg_out (c670e412-d661-4a65-8078-ec5ff359a93f).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.261-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test1_fsmdb0.agg_out (b284ac1b-9160-4103-86e9-6e2e91b51310) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796674, 2529), t: 1 } and commit timestamp Timestamp(1574796674, 2529)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.276-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb (a88d13ca-f01d-4def-89bb-92dfbaa6c76e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.248-0500 I STORAGE [conn108] renameCollection: renaming collection c7023eec-2ecd-45da-9e5d-d9873f84474d from test1_fsmdb0.tmp.agg_out.e2c27f19-1044-4f76-ac89-00f87f0ae54b to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.261-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test1_fsmdb0.agg_out (b284ac1b-9160-4103-86e9-6e2e91b51310).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.276-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb (a88d13ca-f01d-4def-89bb-92dfbaa6c76e)'. Ident: 'index-132--4104909142373009110', commit timestamp: 'Timestamp(1574796674, 2528)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.248-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (c670e412-d661-4a65-8078-ec5ff359a93f)'. Ident: 'index-96-8224331490264904478', commit timestamp: 'Timestamp(1574796671, 2530)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.261-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection d957eda8-107c-4316-bee2-07ec72ecf5f9 from test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.276-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb (a88d13ca-f01d-4def-89bb-92dfbaa6c76e)'. Ident: 'index-137--4104909142373009110', commit timestamp: 'Timestamp(1574796674, 2528)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.248-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (c670e412-d661-4a65-8078-ec5ff359a93f)'. Ident: 'index-100-8224331490264904478', commit timestamp: 'Timestamp(1574796671, 2530)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.261-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (b284ac1b-9160-4103-86e9-6e2e91b51310)'. Ident: 'index-118--8000595249233899911', commit timestamp: 'Timestamp(1574796674, 2529)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.276-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb'. Ident: collection-131--4104909142373009110, commit timestamp: Timestamp(1574796674, 2528)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.248-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-93-8224331490264904478, commit timestamp: Timestamp(1574796671, 2530)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.261-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (b284ac1b-9160-4103-86e9-6e2e91b51310)'. Ident: 'index-125--8000595249233899911', commit timestamp: 'Timestamp(1574796674, 2529)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.277-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 (d957eda8-107c-4316-bee2-07ec72ecf5f9) to test1_fsmdb0.agg_out and drop b284ac1b-9160-4103-86e9-6e2e91b51310.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.248-0500 I INDEX [conn114] Registering index build: c5b24f56-0349-48dd-936a-fca1f25efccc
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.261-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-117--8000595249233899911, commit timestamp: Timestamp(1574796674, 2529)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.277-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test1_fsmdb0.agg_out (b284ac1b-9160-4103-86e9-6e2e91b51310) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796674, 2529), t: 1 } and commit timestamp Timestamp(1574796674, 2529)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.248-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.278-0500 I INDEX [ReplWriterWorker-14] index build: starting on test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.277-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test1_fsmdb0.agg_out (b284ac1b-9160-4103-86e9-6e2e91b51310).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.248-0500 I COMMAND [conn65] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4608109259034281398, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4503194923680955697, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796671050), clusterTime: Timestamp(1574796671, 5) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796671, 6), signature: { hash: BinData(0, D613E78D194EC2353ABD788EFEF825FE39426034), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 197ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.278-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.277-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection d957eda8-107c-4316-bee2-07ec72ecf5f9 from test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.249-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.278-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 30900d52-25fd-40ca-861a-1d13aaf9807d: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 (a41f43d1-2c89-4c41-a8f5-515233da8c7e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.277-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (b284ac1b-9160-4103-86e9-6e2e91b51310)'. Ident: 'index-118--4104909142373009110', commit timestamp: 'Timestamp(1574796674, 2529)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.251-0500 I STORAGE [conn110] createCollection: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 with generated UUID: f4608a2b-7f90-424d-bb80-675faa9006ee and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.278-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.277-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (b284ac1b-9160-4103-86e9-6e2e91b51310)'. Ident: 'index-125--4104909142373009110', commit timestamp: 'Timestamp(1574796674, 2529)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.251-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.279-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.277-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-117--4104909142373009110, commit timestamp: Timestamp(1574796674, 2529)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.264-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: dbda5a6e-dc99-4102-94ef-a99d1814038f: test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab ( b284ac1b-9160-4103-86e9-6e2e91b51310 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.282-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.295-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.272-0500 I INDEX [conn112] index build: starting on test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.283-0500 I STORAGE [ReplWriterWorker-10] createCollection: test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf with provided UUID: 98d69c0e-084e-4706-a268-0475b0e8b641 and options: { uuid: UUID("98d69c0e-084e-4706-a268-0475b0e8b641"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.295-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.272-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.284-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 30900d52-25fd-40ca-861a-1d13aaf9807d: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 ( a41f43d1-2c89-4c41-a8f5-515233da8c7e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.295-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 855cad46-7cb3-4d3e-b007-9ead6bd3a722: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 (a41f43d1-2c89-4c41-a8f5-515233da8c7e ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.272-0500 I STORAGE [conn112] Index build initialized: 48a7b0cf-4235-40bf-a28b-8a3755593d59: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a (503e4ffc-8a06-40c8-96cb-7e6f5b72a689 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.298-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.295-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.272-0500 I INDEX [conn112] Waiting for index build to complete: 48a7b0cf-4235-40bf-a28b-8a3755593d59
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.299-0500 I STORAGE [ReplWriterWorker-4] createCollection: test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 with provided UUID: 2566decf-8e7b-4c6f-a88c-b3ff026ae4a6 and options: { uuid: UUID("2566decf-8e7b-4c6f-a88c-b3ff026ae4a6"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.296-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.272-0500 I INDEX [conn46] Index build completed: dbda5a6e-dc99-4102-94ef-a99d1814038f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.313-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.297-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.278-0500 I INDEX [conn110] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.338-0500 I INDEX [ReplWriterWorker-6] index build: starting on test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.299-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 855cad46-7cb3-4d3e-b007-9ead6bd3a722: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 ( a41f43d1-2c89-4c41-a8f5-515233da8c7e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.292-0500 I INDEX [conn114] index build: starting on test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.338-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.299-0500 I STORAGE [ReplWriterWorker-14] createCollection: test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf with provided UUID: 98d69c0e-084e-4706-a268-0475b0e8b641 and options: { uuid: UUID("98d69c0e-084e-4706-a268-0475b0e8b641"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.292-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.338-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 33d77835-b22e-4ac4-8ae4-6c97249fb35a: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 (5e9a527a-e215-4481-815a-143e8412903a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.313-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.292-0500 I STORAGE [conn114] Index build initialized: c5b24f56-0349-48dd-936a-fca1f25efccc: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f (f5f2650b-d83a-49fb-b607-dd23864e5360 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.338-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.314-0500 I STORAGE [ReplWriterWorker-15] createCollection: test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 with provided UUID: 2566decf-8e7b-4c6f-a88c-b3ff026ae4a6 and options: { uuid: UUID("2566decf-8e7b-4c6f-a88c-b3ff026ae4a6"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.292-0500 I INDEX [conn114] Waiting for index build to complete: c5b24f56-0349-48dd-936a-fca1f25efccc
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.339-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.325-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.292-0500 I COMMAND [conn46] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.341-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.332-0500 I COMMAND [conn56] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796674, 3035) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("c65a601f-c957-428e-adeb-3bd85740d639") }, $clusterTime: { clusterTime: Timestamp(1574796674, 3035), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 4133 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 111ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.292-0500 I STORAGE [conn46] dropCollection: test1_fsmdb0.agg_out (c7023eec-2ecd-45da-9e5d-d9873f84474d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796671, 3036), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.344-0500 I COMMAND [ReplWriterWorker-3] CMD: drop test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.352-0500 I INDEX [ReplWriterWorker-5] index build: starting on test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.292-0500 I STORAGE [conn46] Finishing collection drop for test1_fsmdb0.agg_out (c7023eec-2ecd-45da-9e5d-d9873f84474d).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.344-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae (eaccdea6-06b1-45d9-883d-16ca6ae0b7d6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796674, 3539), t: 1 } and commit timestamp Timestamp(1574796674, 3539)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.352-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.292-0500 I STORAGE [conn46] renameCollection: renaming collection 07fdbe1f-c136-4799-8752-4fd099eb0027 from test1_fsmdb0.tmp.agg_out.ac70d9e7-5025-4521-9a0b-5a778718489a to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.344-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 33d77835-b22e-4ac4-8ae4-6c97249fb35a: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 ( 5e9a527a-e215-4481-815a-143e8412903a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.352-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: d3cdbd41-1985-4c59-8526-bc6c76ebbcb3: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 (5e9a527a-e215-4481-815a-143e8412903a ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.292-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (c7023eec-2ecd-45da-9e5d-d9873f84474d)'. Ident: 'index-97-8224331490264904478', commit timestamp: 'Timestamp(1574796671, 3036)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.344-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae (eaccdea6-06b1-45d9-883d-16ca6ae0b7d6).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.352-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.292-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (c7023eec-2ecd-45da-9e5d-d9873f84474d)'. Ident: 'index-102-8224331490264904478', commit timestamp: 'Timestamp(1574796671, 3036)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.344-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae (eaccdea6-06b1-45d9-883d-16ca6ae0b7d6)'. Ident: 'index-140--8000595249233899911', commit timestamp: 'Timestamp(1574796674, 3539)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.353-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.292-0500 I STORAGE [conn46] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-94-8224331490264904478, commit timestamp: Timestamp(1574796671, 3036)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.344-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae (eaccdea6-06b1-45d9-883d-16ca6ae0b7d6)'. Ident: 'index-145--8000595249233899911', commit timestamp: 'Timestamp(1574796674, 3539)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.354-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.292-0500 I INDEX [conn110] Registering index build: f2b08533-52fa-416d-8d07-04a9e62c2257
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:14.344-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae'. Ident: collection-139--8000595249233899911, commit timestamp: Timestamp(1574796674, 3539)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.357-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: d3cdbd41-1985-4c59-8526-bc6c76ebbcb3: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 ( 5e9a527a-e215-4481-815a-143e8412903a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.292-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.357-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.292-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.357-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae (eaccdea6-06b1-45d9-883d-16ca6ae0b7d6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796674, 3539), t: 1 } and commit timestamp Timestamp(1574796674, 3539)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.292-0500 I COMMAND [conn67] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4749182629244063469, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7170238183445983394, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796671127), clusterTime: Timestamp(1574796671, 1014) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796671, 1014), signature: { hash: BinData(0, D613E78D194EC2353ABD788EFEF825FE39426034), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 13705 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 163ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.357-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae (eaccdea6-06b1-45d9-883d-16ca6ae0b7d6).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.357-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae (eaccdea6-06b1-45d9-883d-16ca6ae0b7d6)'. Ident: 'index-140--4104909142373009110', commit timestamp: 'Timestamp(1574796674, 3539)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.357-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae (eaccdea6-06b1-45d9-883d-16ca6ae0b7d6)'. Ident: 'index-145--4104909142373009110', commit timestamp: 'Timestamp(1574796674, 3539)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:14.358-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae'. Ident: collection-139--4104909142373009110, commit timestamp: Timestamp(1574796674, 3539)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.293-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.294-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:11.310-0500 I INDEX [conn110] index build: starting on test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.014-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.014-0500 I STORAGE [conn110] Index build initialized: f2b08533-52fa-416d-8d07-04a9e62c2257: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 (f4608a2b-7f90-424d-bb80-675faa9006ee ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.014-0500 I INDEX [conn110] Waiting for index build to complete: f2b08533-52fa-416d-8d07-04a9e62c2257
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.016-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.020-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.020-0500 I COMMAND [conn108] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.020-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.agg_out (07fdbe1f-c136-4799-8752-4fd099eb0027) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796674, 6), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.020-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.agg_out (07fdbe1f-c136-4799-8752-4fd099eb0027).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.020-0500 I STORAGE [conn108] renameCollection: renaming collection b284ac1b-9160-4103-86e9-6e2e91b51310 from test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.020-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (07fdbe1f-c136-4799-8752-4fd099eb0027)'. Ident: 'index-105-8224331490264904478', commit timestamp: 'Timestamp(1574796674, 6)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.020-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (07fdbe1f-c136-4799-8752-4fd099eb0027)'. Ident: 'index-106-8224331490264904478', commit timestamp: 'Timestamp(1574796674, 6)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.020-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-104-8224331490264904478, commit timestamp: Timestamp(1574796674, 6)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.020-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.020-0500 I COMMAND [conn108] command test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab appName: "tid:2" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test1_fsmdb0.tmp.agg_out.a7508c3a-9b3b-481e-8bca-2469e4ed9eab", to: "test1_fsmdb0.agg_out", collectionOptions: { validationLevel: "moderate", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796671, 3536), signature: { hash: BinData(0, D613E78D194EC2353ABD788EFEF825FE39426034), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2709678 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2710ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.021-0500 I COMMAND [conn119] command test1_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796671, 2530), lsid: { id: UUID("c8c15e08-f1a6-4edc-831c-249e4d0ea0c0") }, $clusterTime: { clusterTime: Timestamp(1574796671, 2530), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796671, 2530). Collection minimum timestamp is Timestamp(1574796674, 5)" errName:SnapshotUnavailable errCode:246 reslen:579 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2702693 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2702ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.021-0500 I COMMAND [conn68] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 9142984922892772980, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1698416762940634913, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796671167), clusterTime: Timestamp(1574796671, 1521) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796671, 1522), signature: { hash: BinData(0, D613E78D194EC2353ABD788EFEF825FE39426034), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2853ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.021-0500 I STORAGE [conn108] createCollection: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb with generated UUID: a88d13ca-f01d-4def-89bb-92dfbaa6c76e and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.021-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: c5b24f56-0349-48dd-936a-fca1f25efccc: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f ( f5f2650b-d83a-49fb-b607-dd23864e5360 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.022-0500 I INDEX [conn114] Index build completed: c5b24f56-0349-48dd-936a-fca1f25efccc
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.022-0500 I COMMAND [conn114] command test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796671, 2530), signature: { hash: BinData(0, D613E78D194EC2353ABD788EFEF825FE39426034), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 85 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2773ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.022-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 48a7b0cf-4235-40bf-a28b-8a3755593d59: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a ( 503e4ffc-8a06-40c8-96cb-7e6f5b72a689 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.022-0500 I INDEX [conn112] Index build completed: 48a7b0cf-4235-40bf-a28b-8a3755593d59
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.023-0500 I COMMAND [conn112] command test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796671, 2024), signature: { hash: BinData(0, D613E78D194EC2353ABD788EFEF825FE39426034), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 17363 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2816ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.023-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.032-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.039-0500 I INDEX [conn108] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.040-0500 I INDEX [conn108] Registering index build: 75faa46d-ac48-4550-b041-2983fa39b3cc
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.040-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: f2b08533-52fa-416d-8d07-04a9e62c2257: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 ( f4608a2b-7f90-424d-bb80-675faa9006ee ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.042-0500 I STORAGE [conn114] createCollection: test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 with generated UUID: d957eda8-107c-4316-bee2-07ec72ecf5f9 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.064-0500 I INDEX [conn108] index build: starting on test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:16.304-0500 I COMMAND [conn32] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee") }, $clusterTime: { clusterTime: Timestamp(1574796674, 1518), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:820 protocol:op_msg 2197ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.064-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.064-0500 I STORAGE [conn108] Index build initialized: 75faa46d-ac48-4550-b041-2983fa39b3cc: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb (a88d13ca-f01d-4def-89bb-92dfbaa6c76e ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.064-0500 I INDEX [conn108] Waiting for index build to complete: 75faa46d-ac48-4550-b041-2983fa39b3cc
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.064-0500 I INDEX [conn110] Index build completed: f2b08533-52fa-416d-8d07-04a9e62c2257
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.064-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.064-0500 I COMMAND [conn110] command test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796671, 3034), signature: { hash: BinData(0, D613E78D194EC2353ABD788EFEF825FE39426034), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 13375 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2785ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.073-0500 I INDEX [conn114] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.073-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.077-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.077-0500 I COMMAND [conn112] CMD: drop test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.077-0500 I INDEX [conn114] Registering index build: 78dca148-e535-4a54-b4ff-1c614c088089
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.077-0500 I STORAGE [conn112] dropCollection: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f (f5f2650b-d83a-49fb-b607-dd23864e5360) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.077-0500 I STORAGE [conn112] Finishing collection drop for test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f (f5f2650b-d83a-49fb-b607-dd23864e5360).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.077-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f (f5f2650b-d83a-49fb-b607-dd23864e5360)'. Ident: 'index-115-8224331490264904478', commit timestamp: 'Timestamp(1574796674, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.077-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f (f5f2650b-d83a-49fb-b607-dd23864e5360)'. Ident: 'index-120-8224331490264904478', commit timestamp: 'Timestamp(1574796674, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.077-0500 I STORAGE [conn112] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f'. Ident: collection-113-8224331490264904478, commit timestamp: Timestamp(1574796674, 1015)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.078-0500 I COMMAND [conn46] CMD: drop test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.078-0500 I COMMAND [conn71] command test1_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3397777473291980237, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 155210035621970036, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796671217), clusterTime: Timestamp(1574796671, 2091) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796671, 2091), signature: { hash: BinData(0, D613E78D194EC2353ABD788EFEF825FE39426034), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:997 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2859ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.079-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 75faa46d-ac48-4550-b041-2983fa39b3cc: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb ( a88d13ca-f01d-4def-89bb-92dfbaa6c76e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.081-0500 I STORAGE [conn112] createCollection: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae with generated UUID: eaccdea6-06b1-45d9-883d-16ca6ae0b7d6 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.104-0500 I INDEX [conn114] index build: starting on test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.104-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.104-0500 I STORAGE [conn114] Index build initialized: 78dca148-e535-4a54-b4ff-1c614c088089: test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 (d957eda8-107c-4316-bee2-07ec72ecf5f9 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.104-0500 I INDEX [conn114] Waiting for index build to complete: 78dca148-e535-4a54-b4ff-1c614c088089
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.104-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.104-0500 I INDEX [conn108] Index build completed: 75faa46d-ac48-4550-b041-2983fa39b3cc
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.104-0500 I STORAGE [conn46] dropCollection: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a (503e4ffc-8a06-40c8-96cb-7e6f5b72a689) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.104-0500 I STORAGE [conn46] Finishing collection drop for test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a (503e4ffc-8a06-40c8-96cb-7e6f5b72a689).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.104-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a (503e4ffc-8a06-40c8-96cb-7e6f5b72a689)'. Ident: 'index-110-8224331490264904478', commit timestamp: 'Timestamp(1574796674, 1518)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.104-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a (503e4ffc-8a06-40c8-96cb-7e6f5b72a689)'. Ident: 'index-116-8224331490264904478', commit timestamp: 'Timestamp(1574796674, 1518)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.104-0500 I STORAGE [conn46] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a'. Ident: collection-107-8224331490264904478, commit timestamp: Timestamp(1574796674, 1518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.105-0500 I COMMAND [conn70] command test1_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3325424061510226017, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5947410425836416208, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796671143), clusterTime: Timestamp(1574796671, 1465) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796671, 1518), signature: { hash: BinData(0, D613E78D194EC2353ABD788EFEF825FE39426034), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:997 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 20267 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2960ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.112-0500 I INDEX [conn112] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.112-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.115-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.115-0500 I INDEX [conn112] Registering index build: d0c16cbd-d811-40f0-859b-60aaec4a7ff3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.116-0500 I COMMAND [conn46] CMD: drop test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.117-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 78dca148-e535-4a54-b4ff-1c614c088089: test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 ( d957eda8-107c-4316-bee2-07ec72ecf5f9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.117-0500 I STORAGE [conn110] createCollection: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 with generated UUID: a41f43d1-2c89-4c41-a8f5-515233da8c7e and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.140-0500 I INDEX [conn112] index build: starting on test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.140-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.140-0500 I STORAGE [conn112] Index build initialized: d0c16cbd-d811-40f0-859b-60aaec4a7ff3: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae (eaccdea6-06b1-45d9-883d-16ca6ae0b7d6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.140-0500 I INDEX [conn112] Waiting for index build to complete: d0c16cbd-d811-40f0-859b-60aaec4a7ff3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.140-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.140-0500 I INDEX [conn114] Index build completed: 78dca148-e535-4a54-b4ff-1c614c088089
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.140-0500 I STORAGE [conn46] dropCollection: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 (f4608a2b-7f90-424d-bb80-675faa9006ee) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.140-0500 I STORAGE [conn46] Finishing collection drop for test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 (f4608a2b-7f90-424d-bb80-675faa9006ee).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.140-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 (f4608a2b-7f90-424d-bb80-675faa9006ee)'. Ident: 'index-119-8224331490264904478', commit timestamp: 'Timestamp(1574796674, 1971)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.140-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954 (f4608a2b-7f90-424d-bb80-675faa9006ee)'. Ident: 'index-122-8224331490264904478', commit timestamp: 'Timestamp(1574796674, 1971)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.140-0500 I STORAGE [conn46] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954'. Ident: collection-117-8224331490264904478, commit timestamp: Timestamp(1574796674, 1971)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.140-0500 I COMMAND [conn65] command test1_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 294869230466132394, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1823057077059384070, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796671249), clusterTime: Timestamp(1574796671, 2530) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796671, 2530), signature: { hash: BinData(0, D613E78D194EC2353ABD788EFEF825FE39426034), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:997 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2889ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.147-0500 I INDEX [conn110] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.147-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.149-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.150-0500 I INDEX [conn110] Registering index build: f521dec5-4b36-4b0a-9881-8e7d1a17ec8a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.150-0500 I COMMAND [conn108] CMD: drop test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.150-0500 I STORAGE [conn46] createCollection: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 with generated UUID: 5e9a527a-e215-4481-815a-143e8412903a and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.153-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d0c16cbd-d811-40f0-859b-60aaec4a7ff3: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae ( eaccdea6-06b1-45d9-883d-16ca6ae0b7d6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.175-0500 I INDEX [conn110] index build: starting on test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.175-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.175-0500 I STORAGE [conn110] Index build initialized: f521dec5-4b36-4b0a-9881-8e7d1a17ec8a: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 (a41f43d1-2c89-4c41-a8f5-515233da8c7e ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.175-0500 I INDEX [conn110] Waiting for index build to complete: f521dec5-4b36-4b0a-9881-8e7d1a17ec8a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.175-0500 I INDEX [conn112] Index build completed: d0c16cbd-d811-40f0-859b-60aaec4a7ff3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.175-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb (a88d13ca-f01d-4def-89bb-92dfbaa6c76e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.175-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb (a88d13ca-f01d-4def-89bb-92dfbaa6c76e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.175-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb (a88d13ca-f01d-4def-89bb-92dfbaa6c76e)'. Ident: 'index-125-8224331490264904478', commit timestamp: 'Timestamp(1574796674, 2528)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.175-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb (a88d13ca-f01d-4def-89bb-92dfbaa6c76e)'. Ident: 'index-126-8224331490264904478', commit timestamp: 'Timestamp(1574796674, 2528)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.175-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb'. Ident: collection-124-8224331490264904478, commit timestamp: Timestamp(1574796674, 2528)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.176-0500 I COMMAND [conn67] command test1_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7878667198757022358, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3608311809175719592, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796674015), clusterTime: Timestamp(1574796671, 3100) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796674, 6), signature: { hash: BinData(0, A55B86B158A835B757448A7B58B97BD9BF94D47D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:997 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 154ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.182-0500 I INDEX [conn46] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.182-0500 I COMMAND [conn114] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.183-0500 I STORAGE [conn114] dropCollection: test1_fsmdb0.agg_out (b284ac1b-9160-4103-86e9-6e2e91b51310) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796674, 2529), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.183-0500 I STORAGE [conn114] Finishing collection drop for test1_fsmdb0.agg_out (b284ac1b-9160-4103-86e9-6e2e91b51310).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.183-0500 I STORAGE [conn114] renameCollection: renaming collection d957eda8-107c-4316-bee2-07ec72ecf5f9 from test1_fsmdb0.tmp.agg_out.7c1cba8c-f359-4de6-8e81-1663f2d4c4a7 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.183-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (b284ac1b-9160-4103-86e9-6e2e91b51310)'. Ident: 'index-111-8224331490264904478', commit timestamp: 'Timestamp(1574796674, 2529)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.183-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (b284ac1b-9160-4103-86e9-6e2e91b51310)'. Ident: 'index-112-8224331490264904478', commit timestamp: 'Timestamp(1574796674, 2529)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.183-0500 I STORAGE [conn114] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-108-8224331490264904478, commit timestamp: Timestamp(1574796674, 2529)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.183-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.183-0500 I COMMAND [conn68] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7787961328077740044, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8843613688318192295, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796674040), clusterTime: Timestamp(1574796674, 10) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796674, 10), signature: { hash: BinData(0, A55B86B158A835B757448A7B58B97BD9BF94D47D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 142ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.183-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.187-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.187-0500 I INDEX [conn46] Registering index build: 773bb7f7-ef5e-4158-a698-c9848e8af1cd
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.188-0500 I STORAGE [conn114] createCollection: test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf with generated UUID: 98d69c0e-084e-4706-a268-0475b0e8b641 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.189-0500 I STORAGE [conn112] createCollection: test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 with generated UUID: 2566decf-8e7b-4c6f-a88c-b3ff026ae4a6 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.190-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: f521dec5-4b36-4b0a-9881-8e7d1a17ec8a: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 ( a41f43d1-2c89-4c41-a8f5-515233da8c7e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.219-0500 I INDEX [conn46] index build: starting on test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.219-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.219-0500 I STORAGE [conn46] Index build initialized: 773bb7f7-ef5e-4158-a698-c9848e8af1cd: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 (5e9a527a-e215-4481-815a-143e8412903a ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.219-0500 I INDEX [conn46] Waiting for index build to complete: 773bb7f7-ef5e-4158-a698-c9848e8af1cd
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.219-0500 I INDEX [conn110] Index build completed: f521dec5-4b36-4b0a-9881-8e7d1a17ec8a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.219-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.227-0500 I INDEX [conn114] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.235-0500 I INDEX [conn112] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.235-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.238-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.238-0500 I INDEX [conn114] Registering index build: 57b5d276-25bb-4174-ab37-089384f50925
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.238-0500 I INDEX [conn112] Registering index build: 4a7bc309-a8ee-464f-adda-e4394cc6b4d0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.239-0500 I COMMAND [conn108] CMD: drop test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.241-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 773bb7f7-ef5e-4158-a698-c9848e8af1cd: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 ( 5e9a527a-e215-4481-815a-143e8412903a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.259-0500 I INDEX [conn114] index build: starting on test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.259-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.259-0500 I STORAGE [conn114] Index build initialized: 57b5d276-25bb-4174-ab37-089384f50925: test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf (98d69c0e-084e-4706-a268-0475b0e8b641 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.259-0500 I INDEX [conn114] Waiting for index build to complete: 57b5d276-25bb-4174-ab37-089384f50925
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.259-0500 I INDEX [conn46] Index build completed: 773bb7f7-ef5e-4158-a698-c9848e8af1cd
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.259-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae (eaccdea6-06b1-45d9-883d-16ca6ae0b7d6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.260-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae (eaccdea6-06b1-45d9-883d-16ca6ae0b7d6).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.260-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae (eaccdea6-06b1-45d9-883d-16ca6ae0b7d6)'. Ident: 'index-133-8224331490264904478', commit timestamp: 'Timestamp(1574796674, 3539)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.260-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae (eaccdea6-06b1-45d9-883d-16ca6ae0b7d6)'. Ident: 'index-134-8224331490264904478', commit timestamp: 'Timestamp(1574796674, 3539)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.260-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae'. Ident: collection-131-8224331490264904478, commit timestamp: Timestamp(1574796674, 3539)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.260-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.260-0500 I COMMAND [conn71] command test1_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2203415423166284614, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7006859591216201572, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796674079), clusterTime: Timestamp(1574796674, 1015) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796674, 1015), signature: { hash: BinData(0, A55B86B158A835B757448A7B58B97BD9BF94D47D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:990 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 179ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.260-0500 I COMMAND [conn110] CMD: drop test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.260-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.263-0500 I STORAGE [conn46] createCollection: test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c with generated UUID: ac8800bb-b22a-4476-8820-e9299799a2cb and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.272-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.290-0500 I INDEX [conn112] index build: starting on test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.307-0500 I STORAGE [ReplWriterWorker-15] createCollection: test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c with provided UUID: ac8800bb-b22a-4476-8820-e9299799a2cb and options: { uuid: UUID("ac8800bb-b22a-4476-8820-e9299799a2cb"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.292-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 57b5d276-25bb-4174-ab37-089384f50925: test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf ( 98d69c0e-084e-4706-a268-0475b0e8b641 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:14.300-0500 I INDEX [conn46] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I STORAGE [conn112] Index build initialized: 4a7bc309-a8ee-464f-adda-e4394cc6b4d0: test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I INDEX [conn112] Waiting for index build to complete: 4a7bc309-a8ee-464f-adda-e4394cc6b4d0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I INDEX [conn114] Index build completed: 57b5d276-25bb-4174-ab37-089384f50925
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I COMMAND [conn46] command test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c appName: "tid:1" command: create { create: "tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c", temp: true, validationLevel: "off", validationAction: "warn", databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796674, 3539), signature: { hash: BinData(0, A55B86B158A835B757448A7B58B97BD9BF94D47D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2040ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I COMMAND [conn114] command test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796674, 3035), signature: { hash: BinData(0, A55B86B158A835B757448A7B58B97BD9BF94D47D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 10133 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2074ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I STORAGE [conn110] dropCollection: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 (a41f43d1-2c89-4c41-a8f5-515233da8c7e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I STORAGE [conn110] Finishing collection drop for test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 (a41f43d1-2c89-4c41-a8f5-515233da8c7e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 (a41f43d1-2c89-4c41-a8f5-515233da8c7e)'. Ident: 'index-137-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 (a41f43d1-2c89-4c41-a8f5-515233da8c7e)'. Ident: 'index-138-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I STORAGE [conn110] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555'. Ident: collection-135-8224331490264904478, commit timestamp: Timestamp(1574796676, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I COMMAND [conn110] command test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 command: drop { drop: "tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555", databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, $clusterTime: { clusterTime: Timestamp(1574796674, 3539), signature: { hash: BinData(0, A55B86B158A835B757448A7B58B97BD9BF94D47D), keyId: 6763700092420489256 } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:420 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2042ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I COMMAND [conn108] command admin.$cmd appName: "tid:4" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896", to: "test1_fsmdb0.agg_out", collectionOptions: { validationLevel: "moderate", validationAction: "warn" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796674, 4042), signature: { hash: BinData(0, A55B86B158A835B757448A7B58B97BD9BF94D47D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "admin" } numYields:0 ok:0 errMsg:"collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:614 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2012687 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2012ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I INDEX [conn114] Registering index build: 568abe5b-847a-41d2-9ae0-8d863c930fab
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I COMMAND [conn119] command test1_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796674, 3035), lsid: { id: UUID("c8c15e08-f1a6-4edc-831c-249e4d0ea0c0") }, $clusterTime: { clusterTime: Timestamp(1574796674, 3035), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796674, 3035). Collection minimum timestamp is Timestamp(1574796676, 1)" errName:SnapshotUnavailable errCode:246 reslen:579 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 1969401 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 1969ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.303-0500 I COMMAND [conn70] command test1_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3562985516191852249, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3617775868398877361, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796674106), clusterTime: Timestamp(1574796674, 1518) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796674, 1520), signature: { hash: BinData(0, A55B86B158A835B757448A7B58B97BD9BF94D47D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:990 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2187ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.304-0500 I COMMAND [conn108] CMD: drop test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.304-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.306-0500 I STORAGE [conn110] createCollection: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b with generated UUID: 34bb5cc6-2325-443c-b88b-046e8238b7c1 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.306-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.320-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 4a7bc309-a8ee-464f-adda-e4394cc6b4d0: test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 ( 2566decf-8e7b-4c6f-a88c-b3ff026ae4a6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.322-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.323-0500 I STORAGE [ReplWriterWorker-15] createCollection: test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c with provided UUID: ac8800bb-b22a-4476-8820-e9299799a2cb and options: { uuid: UUID("ac8800bb-b22a-4476-8820-e9299799a2cb"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.329-0500 I INDEX [conn114] index build: starting on test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.329-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.329-0500 I STORAGE [conn114] Index build initialized: 568abe5b-847a-41d2-9ae0-8d863c930fab: test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c (ac8800bb-b22a-4476-8820-e9299799a2cb ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.329-0500 I INDEX [conn114] Waiting for index build to complete: 568abe5b-847a-41d2-9ae0-8d863c930fab
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.329-0500 I INDEX [conn112] Index build completed: 4a7bc309-a8ee-464f-adda-e4394cc6b4d0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.329-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.329-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 (5e9a527a-e215-4481-815a-143e8412903a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.329-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 (5e9a527a-e215-4481-815a-143e8412903a).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.329-0500 I COMMAND [conn112] command test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796674, 3035), signature: { hash: BinData(0, A55B86B158A835B757448A7B58B97BD9BF94D47D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 3293 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2093ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.329-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 (5e9a527a-e215-4481-815a-143e8412903a)'. Ident: 'index-141-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 507)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.329-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 (5e9a527a-e215-4481-815a-143e8412903a)'. Ident: 'index-142-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 507)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.329-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896'. Ident: collection-139-8224331490264904478, commit timestamp: Timestamp(1574796676, 507)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.330-0500 I COMMAND [conn65] command test1_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5747379988015046575, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7232278218434505592, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796674142), clusterTime: Timestamp(1574796674, 1971) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796674, 2023), signature: { hash: BinData(0, A55B86B158A835B757448A7B58B97BD9BF94D47D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:990 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 7059 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2186ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:16.330-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563") }, $clusterTime: { clusterTime: Timestamp(1574796674, 1971), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:820 protocol:op_msg 2188ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.335-0500 I INDEX [conn110] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.336-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.338-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.338-0500 I COMMAND [conn46] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.338-0500 I STORAGE [conn46] dropCollection: test1_fsmdb0.agg_out (d957eda8-107c-4316-bee2-07ec72ecf5f9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 510), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.338-0500 I STORAGE [conn46] Finishing collection drop for test1_fsmdb0.agg_out (d957eda8-107c-4316-bee2-07ec72ecf5f9).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.338-0500 I STORAGE [conn46] renameCollection: renaming collection 98d69c0e-084e-4706-a268-0475b0e8b641 from test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.338-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (d957eda8-107c-4316-bee2-07ec72ecf5f9)'. Ident: 'index-129-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 510)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.338-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (d957eda8-107c-4316-bee2-07ec72ecf5f9)'. Ident: 'index-130-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 510)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.338-0500 I STORAGE [conn46] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-127-8224331490264904478, commit timestamp: Timestamp(1574796676, 510)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.338-0500 I INDEX [conn110] Registering index build: 98f3282e-c950-40b3-8a75-1e9b3b70c604
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.339-0500 I COMMAND [conn68] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1002029980406399106, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4298777399336783289, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796674184), clusterTime: Timestamp(1574796674, 2529) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796674, 2532), signature: { hash: BinData(0, A55B86B158A835B757448A7B58B97BD9BF94D47D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2151ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.339-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.339-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 568abe5b-847a-41d2-9ae0-8d863c930fab: test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c ( ac8800bb-b22a-4476-8820-e9299799a2cb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:16.339-0500 I COMMAND [conn74] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856") }, $clusterTime: { clusterTime: Timestamp(1574796674, 2529), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2154ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.339-0500 I STORAGE [conn46] createCollection: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 with generated UUID: da4b85bd-5887-4c1a-8d81-529b905cc7c1 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.340-0500 I INDEX [ReplWriterWorker-13] index build: starting on test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.340-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.340-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: ed13fe40-1a1c-43a0-9f55-4ed84dbecf76: test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf (98d69c0e-084e-4706-a268-0475b0e8b641 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.340-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.342-0500 I STORAGE [conn108] createCollection: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 with generated UUID: 9fe9e09d-52a0-4bdd-8243-7e3ba8a99021 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.342-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.344-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.345-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.345-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 (a41f43d1-2c89-4c41-a8f5-515233da8c7e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 2), t: 1 } and commit timestamp Timestamp(1574796676, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.345-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 (a41f43d1-2c89-4c41-a8f5-515233da8c7e).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.345-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 (a41f43d1-2c89-4c41-a8f5-515233da8c7e)'. Ident: 'index-144--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.345-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 (a41f43d1-2c89-4c41-a8f5-515233da8c7e)'. Ident: 'index-149--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.345-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555'. Ident: collection-143--8000595249233899911, commit timestamp: Timestamp(1574796676, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.346-0500 I STORAGE [ReplWriterWorker-5] createCollection: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b with provided UUID: 34bb5cc6-2325-443c-b88b-046e8238b7c1 and options: { uuid: UUID("34bb5cc6-2325-443c-b88b-046e8238b7c1"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.351-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ed13fe40-1a1c-43a0-9f55-4ed84dbecf76: test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf ( 98d69c0e-084e-4706-a268-0475b0e8b641 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.357-0500 I INDEX [ReplWriterWorker-11] index build: starting on test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.357-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.357-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 222d1e03-d2df-49da-b0de-b3c9c5e6694b: test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf (98d69c0e-084e-4706-a268-0475b0e8b641 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.358-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.358-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.360-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.362-0500 I COMMAND [ReplWriterWorker-12] CMD: drop test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.362-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 (a41f43d1-2c89-4c41-a8f5-515233da8c7e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 2), t: 1 } and commit timestamp Timestamp(1574796676, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.362-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 (a41f43d1-2c89-4c41-a8f5-515233da8c7e).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.362-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 (a41f43d1-2c89-4c41-a8f5-515233da8c7e)'. Ident: 'index-144--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.362-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555 (a41f43d1-2c89-4c41-a8f5-515233da8c7e)'. Ident: 'index-149--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.362-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555'. Ident: collection-143--4104909142373009110, commit timestamp: Timestamp(1574796676, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.365-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 222d1e03-d2df-49da-b0de-b3c9c5e6694b: test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf ( 98d69c0e-084e-4706-a268-0475b0e8b641 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.366-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.367-0500 I STORAGE [ReplWriterWorker-0] createCollection: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b with provided UUID: 34bb5cc6-2325-443c-b88b-046e8238b7c1 and options: { uuid: UUID("34bb5cc6-2325-443c-b88b-046e8238b7c1"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.371-0500 I INDEX [conn110] index build: starting on test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.371-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.371-0500 I STORAGE [conn110] Index build initialized: 98f3282e-c950-40b3-8a75-1e9b3b70c604: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b (34bb5cc6-2325-443c-b88b-046e8238b7c1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.371-0500 I INDEX [conn110] Waiting for index build to complete: 98f3282e-c950-40b3-8a75-1e9b3b70c604
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.371-0500 I INDEX [conn114] Index build completed: 568abe5b-847a-41d2-9ae0-8d863c930fab
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.378-0500 I INDEX [conn46] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.382-0500 I INDEX [ReplWriterWorker-3] index build: starting on test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.382-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.382-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: efda0894-4f33-4a5d-b671-fe1de8492b3a: test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.382-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.383-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.384-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.386-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.386-0500 I INDEX [conn108] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.386-0500 I COMMAND [conn112] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.386-0500 I STORAGE [conn112] dropCollection: test1_fsmdb0.agg_out (98d69c0e-084e-4706-a268-0475b0e8b641) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 1014), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.386-0500 I STORAGE [conn112] Finishing collection drop for test1_fsmdb0.agg_out (98d69c0e-084e-4706-a268-0475b0e8b641).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.386-0500 I STORAGE [conn112] renameCollection: renaming collection 2566decf-8e7b-4c6f-a88c-b3ff026ae4a6 from test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.386-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (98d69c0e-084e-4706-a268-0475b0e8b641)'. Ident: 'index-146-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 1014)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.386-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (98d69c0e-084e-4706-a268-0475b0e8b641)'. Ident: 'index-148-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 1014)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.386-0500 I STORAGE [conn112] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-143-8224331490264904478, commit timestamp: Timestamp(1574796676, 1014)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.386-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.386-0500 I INDEX [conn108] Registering index build: da4107b5-e393-4070-8b72-46c9bdc1fbf5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.387-0500 I INDEX [conn46] Registering index build: 967a65e0-104a-4671-8c87-5a79d681fb13
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.387-0500 I COMMAND [conn67] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7737189123573810131, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2768504010139811115, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796674188), clusterTime: Timestamp(1574796674, 2532) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796674, 2532), signature: { hash: BinData(0, A55B86B158A835B757448A7B58B97BD9BF94D47D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2198ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.387-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:16.387-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d") }, $clusterTime: { clusterTime: Timestamp(1574796674, 2532), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2199ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.390-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: efda0894-4f33-4a5d-b671-fe1de8492b3a: test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 ( 2566decf-8e7b-4c6f-a88c-b3ff026ae4a6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.390-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.390-0500 I STORAGE [conn112] createCollection: test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a with generated UUID: 4727367f-2f01-4813-8c72-29b9fcb77a6b and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.390-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.390-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 (5e9a527a-e215-4481-815a-143e8412903a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 507), t: 1 } and commit timestamp Timestamp(1574796676, 507)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.390-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 (5e9a527a-e215-4481-815a-143e8412903a).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.390-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 (5e9a527a-e215-4481-815a-143e8412903a)'. Ident: 'index-148--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 507)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.390-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 (5e9a527a-e215-4481-815a-143e8412903a)'. Ident: 'index-155--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 507)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.390-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896'. Ident: collection-147--8000595249233899911, commit timestamp: Timestamp(1574796676, 507)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.398-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 98f3282e-c950-40b3-8a75-1e9b3b70c604: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b ( 34bb5cc6-2325-443c-b88b-046e8238b7c1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.399-0500 I INDEX [ReplWriterWorker-2] index build: starting on test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.399-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.399-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: a56d0974-5a3d-4729-a1f9-c6639b0e03b5: test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.399-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.399-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.402-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.405-0500 I COMMAND [ReplWriterWorker-4] CMD: drop test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.405-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 (5e9a527a-e215-4481-815a-143e8412903a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 507), t: 1 } and commit timestamp Timestamp(1574796676, 507)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.405-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 (5e9a527a-e215-4481-815a-143e8412903a).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.405-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 (5e9a527a-e215-4481-815a-143e8412903a)'. Ident: 'index-148--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 507)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.405-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896 (5e9a527a-e215-4481-815a-143e8412903a)'. Ident: 'index-155--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 507)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.405-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896'. Ident: collection-147--4104909142373009110, commit timestamp: Timestamp(1574796676, 507)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.406-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: a56d0974-5a3d-4729-a1f9-c6639b0e03b5: test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 ( 2566decf-8e7b-4c6f-a88c-b3ff026ae4a6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.406-0500 I INDEX [ReplWriterWorker-14] index build: starting on test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.406-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.406-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: b3c890d0-76c9-414e-b505-f19c949f8357: test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c (ac8800bb-b22a-4476-8820-e9299799a2cb ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.407-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.407-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.408-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf (98d69c0e-084e-4706-a268-0475b0e8b641) to test1_fsmdb0.agg_out and drop d957eda8-107c-4316-bee2-07ec72ecf5f9.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.410-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.410-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test1_fsmdb0.agg_out (d957eda8-107c-4316-bee2-07ec72ecf5f9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 510), t: 1 } and commit timestamp Timestamp(1574796676, 510)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.410-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test1_fsmdb0.agg_out (d957eda8-107c-4316-bee2-07ec72ecf5f9).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.410-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 98d69c0e-084e-4706-a268-0475b0e8b641 from test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.410-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (d957eda8-107c-4316-bee2-07ec72ecf5f9)'. Ident: 'index-136--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 510)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.410-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (d957eda8-107c-4316-bee2-07ec72ecf5f9)'. Ident: 'index-141--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 510)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.410-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-135--8000595249233899911, commit timestamp: Timestamp(1574796676, 510)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.411-0500 I STORAGE [ReplWriterWorker-13] createCollection: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 with provided UUID: da4b85bd-5887-4c1a-8d81-529b905cc7c1 and options: { uuid: UUID("da4b85bd-5887-4c1a-8d81-529b905cc7c1"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.412-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: b3c890d0-76c9-414e-b505-f19c949f8357: test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c ( ac8800bb-b22a-4476-8820-e9299799a2cb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.414-0500 I INDEX [conn108] index build: starting on test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.414-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.414-0500 I STORAGE [conn108] Index build initialized: da4107b5-e393-4070-8b72-46c9bdc1fbf5: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 (9fe9e09d-52a0-4bdd-8243-7e3ba8a99021 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.414-0500 I INDEX [conn108] Waiting for index build to complete: da4107b5-e393-4070-8b72-46c9bdc1fbf5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.414-0500 I INDEX [conn110] Index build completed: 98f3282e-c950-40b3-8a75-1e9b3b70c604
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.422-0500 I INDEX [conn112] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.422-0500 I INDEX [ReplWriterWorker-12] index build: starting on test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.422-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.422-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: b0152dc6-8eeb-46b1-be95-998681bf1a80: test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c (ac8800bb-b22a-4476-8820-e9299799a2cb ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.422-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.423-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.424-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf (98d69c0e-084e-4706-a268-0475b0e8b641) to test1_fsmdb0.agg_out and drop d957eda8-107c-4316-bee2-07ec72ecf5f9.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.426-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.426-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test1_fsmdb0.agg_out (d957eda8-107c-4316-bee2-07ec72ecf5f9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 510), t: 1 } and commit timestamp Timestamp(1574796676, 510)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.426-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test1_fsmdb0.agg_out (d957eda8-107c-4316-bee2-07ec72ecf5f9).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.426-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 98d69c0e-084e-4706-a268-0475b0e8b641 from test1_fsmdb0.tmp.agg_out.6fbaa62a-0793-4a03-adf6-23cf14bcc8cf to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.426-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (d957eda8-107c-4316-bee2-07ec72ecf5f9)'. Ident: 'index-136--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 510)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.426-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (d957eda8-107c-4316-bee2-07ec72ecf5f9)'. Ident: 'index-141--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 510)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.426-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-135--4104909142373009110, commit timestamp: Timestamp(1574796676, 510)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.427-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: b0152dc6-8eeb-46b1-be95-998681bf1a80: test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c ( ac8800bb-b22a-4476-8820-e9299799a2cb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.428-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.429-0500 I STORAGE [ReplWriterWorker-8] createCollection: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 with provided UUID: da4b85bd-5887-4c1a-8d81-529b905cc7c1 and options: { uuid: UUID("da4b85bd-5887-4c1a-8d81-529b905cc7c1"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.432-0500 I STORAGE [ReplWriterWorker-1] createCollection: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 with provided UUID: 9fe9e09d-52a0-4bdd-8243-7e3ba8a99021 and options: { uuid: UUID("9fe9e09d-52a0-4bdd-8243-7e3ba8a99021"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.437-0500 I INDEX [conn46] index build: starting on test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.437-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.437-0500 I STORAGE [conn46] Index build initialized: 967a65e0-104a-4671-8c87-5a79d681fb13: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 (da4b85bd-5887-4c1a-8d81-529b905cc7c1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.437-0500 I INDEX [conn46] Waiting for index build to complete: 967a65e0-104a-4671-8c87-5a79d681fb13
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.437-0500 I COMMAND [conn114] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.438-0500 I STORAGE [conn114] dropCollection: test1_fsmdb0.agg_out (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 1520), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.438-0500 I STORAGE [conn114] Finishing collection drop for test1_fsmdb0.agg_out (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.438-0500 I STORAGE [conn114] renameCollection: renaming collection ac8800bb-b22a-4476-8820-e9299799a2cb from test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.438-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6)'. Ident: 'index-147-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 1520)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.438-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6)'. Ident: 'index-150-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 1520)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.438-0500 I STORAGE [conn114] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-144-8224331490264904478, commit timestamp: Timestamp(1574796676, 1520)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.438-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.438-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.438-0500 I INDEX [conn112] Registering index build: a04afd5d-cf1c-48e0-b332-32e5856e0ad7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.438-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 595150895974993706, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7957180948621932267, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796674261), clusterTime: Timestamp(1574796674, 3539) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796674, 3539), signature: { hash: BinData(0, A55B86B158A835B757448A7B58B97BD9BF94D47D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2175ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:16.438-0500 I COMMAND [conn33] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063") }, $clusterTime: { clusterTime: Timestamp(1574796674, 3539), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2177ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.438-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.442-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.445-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.448-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.449-0500 I STORAGE [ReplWriterWorker-6] createCollection: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 with provided UUID: 9fe9e09d-52a0-4bdd-8243-7e3ba8a99021 and options: { uuid: UUID("9fe9e09d-52a0-4bdd-8243-7e3ba8a99021"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.450-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.451-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: da4107b5-e393-4070-8b72-46c9bdc1fbf5: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 ( 9fe9e09d-52a0-4bdd-8243-7e3ba8a99021 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.452-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6) to test1_fsmdb0.agg_out and drop 98d69c0e-084e-4706-a268-0475b0e8b641.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.452-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test1_fsmdb0.agg_out (98d69c0e-084e-4706-a268-0475b0e8b641) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 1014), t: 1 } and commit timestamp Timestamp(1574796676, 1014)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.452-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test1_fsmdb0.agg_out (98d69c0e-084e-4706-a268-0475b0e8b641).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.452-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 2566decf-8e7b-4c6f-a88c-b3ff026ae4a6 from test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.453-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (98d69c0e-084e-4706-a268-0475b0e8b641)'. Ident: 'index-152--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 1014)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.453-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (98d69c0e-084e-4706-a268-0475b0e8b641)'. Ident: 'index-159--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 1014)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.453-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-151--8000595249233899911, commit timestamp: Timestamp(1574796676, 1014)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.459-0500 I INDEX [conn112] index build: starting on test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.459-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.459-0500 I STORAGE [conn112] Index build initialized: a04afd5d-cf1c-48e0-b332-32e5856e0ad7: test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a (4727367f-2f01-4813-8c72-29b9fcb77a6b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.459-0500 I INDEX [conn112] Waiting for index build to complete: a04afd5d-cf1c-48e0-b332-32e5856e0ad7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.459-0500 I INDEX [conn108] Index build completed: da4107b5-e393-4070-8b72-46c9bdc1fbf5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.462-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.462-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.463-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 967a65e0-104a-4671-8c87-5a79d681fb13: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 ( da4b85bd-5887-4c1a-8d81-529b905cc7c1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.463-0500 I INDEX [conn46] Index build completed: 967a65e0-104a-4671-8c87-5a79d681fb13
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.463-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.464-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.465-0500 I STORAGE [conn108] createCollection: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef with generated UUID: cf91dd77-ff2d-476a-85e2-c995dc77edf0 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.466-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.467-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.467-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.467-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 504310ee-0129-4289-91d9-6624bc7c92bf: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b (34bb5cc6-2325-443c-b88b-046e8238b7c1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.467-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.468-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.468-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6) to test1_fsmdb0.agg_out and drop 98d69c0e-084e-4706-a268-0475b0e8b641.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.468-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test1_fsmdb0.agg_out (98d69c0e-084e-4706-a268-0475b0e8b641) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 1014), t: 1 } and commit timestamp Timestamp(1574796676, 1014)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.468-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test1_fsmdb0.agg_out (98d69c0e-084e-4706-a268-0475b0e8b641).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.468-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 2566decf-8e7b-4c6f-a88c-b3ff026ae4a6 from test1_fsmdb0.tmp.agg_out.4a67ae17-2750-498a-9f8e-dd5b89d36945 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.468-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (98d69c0e-084e-4706-a268-0475b0e8b641)'. Ident: 'index-152--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 1014)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.468-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (98d69c0e-084e-4706-a268-0475b0e8b641)'. Ident: 'index-159--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 1014)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.468-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-151--4104909142373009110, commit timestamp: Timestamp(1574796676, 1014)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.469-0500 I STORAGE [ReplWriterWorker-9] createCollection: test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a with provided UUID: 4727367f-2f01-4813-8c72-29b9fcb77a6b and options: { uuid: UUID("4727367f-2f01-4813-8c72-29b9fcb77a6b"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.469-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.477-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: a04afd5d-cf1c-48e0-b332-32e5856e0ad7: test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a ( 4727367f-2f01-4813-8c72-29b9fcb77a6b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.477-0500 I INDEX [conn112] Index build completed: a04afd5d-cf1c-48e0-b332-32e5856e0ad7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.477-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 504310ee-0129-4289-91d9-6624bc7c92bf: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b ( 34bb5cc6-2325-443c-b88b-046e8238b7c1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.484-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.486-0500 I INDEX [conn108] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.486-0500 I INDEX [conn108] Registering index build: 3df77d9e-bdbc-4688-bd7b-4c5d977122c3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.486-0500 I COMMAND [conn110] CMD: drop test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.491-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c (ac8800bb-b22a-4476-8820-e9299799a2cb) to test1_fsmdb0.agg_out and drop 2566decf-8e7b-4c6f-a88c-b3ff026ae4a6.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.491-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test1_fsmdb0.agg_out (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 1520), t: 1 } and commit timestamp Timestamp(1574796676, 1520)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.491-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test1_fsmdb0.agg_out (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.491-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection ac8800bb-b22a-4476-8820-e9299799a2cb from test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.491-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6)'. Ident: 'index-154--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 1520)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.491-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6)'. Ident: 'index-163--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 1520)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.491-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-153--8000595249233899911, commit timestamp: Timestamp(1574796676, 1520)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.501-0500 I INDEX [ReplWriterWorker-10] index build: starting on test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.501-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.501-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: ca80f2a4-f809-47c2-9db1-7e9ab3766884: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b (34bb5cc6-2325-443c-b88b-046e8238b7c1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.501-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.502-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.502-0500 I STORAGE [ReplWriterWorker-0] createCollection: test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a with provided UUID: 4727367f-2f01-4813-8c72-29b9fcb77a6b and options: { uuid: UUID("4727367f-2f01-4813-8c72-29b9fcb77a6b"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.504-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.506-0500 I INDEX [conn108] index build: starting on test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.506-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.506-0500 I STORAGE [conn108] Index build initialized: 3df77d9e-bdbc-4688-bd7b-4c5d977122c3: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef (cf91dd77-ff2d-476a-85e2-c995dc77edf0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.506-0500 I INDEX [conn108] Waiting for index build to complete: 3df77d9e-bdbc-4688-bd7b-4c5d977122c3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.506-0500 I STORAGE [conn110] dropCollection: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b (34bb5cc6-2325-443c-b88b-046e8238b7c1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.506-0500 I STORAGE [conn110] Finishing collection drop for test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b (34bb5cc6-2325-443c-b88b-046e8238b7c1).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.506-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b (34bb5cc6-2325-443c-b88b-046e8238b7c1)'. Ident: 'index-157-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 3095)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.506-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b (34bb5cc6-2325-443c-b88b-046e8238b7c1)'. Ident: 'index-158-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 3095)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.506-0500 I STORAGE [conn110] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b'. Ident: collection-155-8224331490264904478, commit timestamp: Timestamp(1574796676, 3095)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.506-0500 I COMMAND [conn70] command test1_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7219727896015576341, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8426196627437416291, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796676305), clusterTime: Timestamp(1574796676, 2) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 2), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796670, 536), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:983 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 200ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.506-0500 I COMMAND [conn46] CMD: drop test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:16.507-0500 I COMMAND [conn32] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee") }, $clusterTime: { clusterTime: Timestamp(1574796676, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:813 protocol:op_msg 201ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.507-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.508-0500 I STORAGE [conn46] dropCollection: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 (da4b85bd-5887-4c1a-8d81-529b905cc7c1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.508-0500 I STORAGE [conn46] Finishing collection drop for test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 (da4b85bd-5887-4c1a-8d81-529b905cc7c1).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.508-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 (da4b85bd-5887-4c1a-8d81-529b905cc7c1)'. Ident: 'index-162-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 3224)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.508-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 (da4b85bd-5887-4c1a-8d81-529b905cc7c1)'. Ident: 'index-168-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 3224)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.508-0500 I STORAGE [conn46] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7'. Ident: collection-159-8224331490264904478, commit timestamp: Timestamp(1574796676, 3224)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.508-0500 I COMMAND [conn114] CMD: drop test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.508-0500 I COMMAND [conn65] command test1_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5782760390449900563, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5434209678325323612, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796676331), clusterTime: Timestamp(1574796676, 507) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 510), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:983 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 169ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:16.508-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563") }, $clusterTime: { clusterTime: Timestamp(1574796676, 507), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:813 protocol:op_msg 177ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.512-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ca80f2a4-f809-47c2-9db1-7e9ab3766884: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b ( 34bb5cc6-2325-443c-b88b-046e8238b7c1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.528-0500 I INDEX [ReplWriterWorker-3] index build: starting on test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:16.611-0500 I COMMAND [conn33] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063") }, $clusterTime: { clusterTime: Timestamp(1574796676, 1526), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:816 protocol:op_msg 147ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.508-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:16.511-0500 I COMMAND [conn74] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856") }, $clusterTime: { clusterTime: Timestamp(1574796676, 510), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:813 protocol:op_msg 170ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.528-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.519-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:16.702-0500 I COMMAND [conn32] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee") }, $clusterTime: { clusterTime: Timestamp(1574796676, 3227), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 191ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.510-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:16.560-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d") }, $clusterTime: { clusterTime: Timestamp(1574796676, 1014), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 171ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.525-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c (ac8800bb-b22a-4476-8820-e9299799a2cb) to test1_fsmdb0.agg_out and drop 2566decf-8e7b-4c6f-a88c-b3ff026ae4a6.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.528-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: a137c0ef-a969-4847-b31e-8971af4a3045: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 (9fe9e09d-52a0-4bdd-8243-7e3ba8a99021 ): indexes: 1
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:16.799-0500 I COMMAND [conn33] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063") }, $clusterTime: { clusterTime: Timestamp(1574796676, 4048), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 172ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.510-0500 I STORAGE [conn114] dropCollection: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 (9fe9e09d-52a0-4bdd-8243-7e3ba8a99021) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:16.665-0500 I COMMAND [conn74] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856") }, $clusterTime: { clusterTime: Timestamp(1574796676, 3292), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 153ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.525-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test1_fsmdb0.agg_out (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 1520), t: 1 } and commit timestamp Timestamp(1574796676, 1520)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.528-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.510-0500 I STORAGE [conn114] Finishing collection drop for test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 (9fe9e09d-52a0-4bdd-8243-7e3ba8a99021).
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:16.703-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563") }, $clusterTime: { clusterTime: Timestamp(1574796676, 3224), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 193ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:20.054-0500 I COMMAND [conn32] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee") }, $clusterTime: { clusterTime: Timestamp(1574796676, 5563), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3350ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.525-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test1_fsmdb0.agg_out (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.529-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.510-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 (9fe9e09d-52a0-4bdd-8243-7e3ba8a99021)'. Ident: 'index-163-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 3292)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:16.746-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d") }, $clusterTime: { clusterTime: Timestamp(1574796676, 3604), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 185ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.525-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection ac8800bb-b22a-4476-8820-e9299799a2cb from test1_fsmdb0.tmp.agg_out.5f4cbc52-dc9c-4a86-99a5-cc2da061847c to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.531-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.510-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 (9fe9e09d-52a0-4bdd-8243-7e3ba8a99021)'. Ident: 'index-164-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 3292)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:20.053-0500 I COMMAND [conn74] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856") }, $clusterTime: { clusterTime: Timestamp(1574796676, 4555), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3386ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.525-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6)'. Ident: 'index-154--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 1520)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.525-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (2566decf-8e7b-4c6f-a88c-b3ff026ae4a6)'. Ident: 'index-163--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 1520)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.510-0500 I STORAGE [conn114] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990'. Ident: collection-160-8224331490264904478, commit timestamp: Timestamp(1574796676, 3292)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:20.054-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563") }, $clusterTime: { clusterTime: Timestamp(1574796676, 5627), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3350ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.525-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-153--4104909142373009110, commit timestamp: Timestamp(1574796676, 1520)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.511-0500 I COMMAND [conn68] command test1_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4645465083523627394, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4072539666739271109, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796676340), clusterTime: Timestamp(1574796676, 510) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 511), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:983 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 169ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.540-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: a137c0ef-a969-4847-b31e-8971af4a3045: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 ( 9fe9e09d-52a0-4bdd-8243-7e3ba8a99021 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.545-0500 I INDEX [ReplWriterWorker-1] index build: starting on test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.511-0500 I STORAGE [conn114] createCollection: test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c with generated UUID: 3d8202b5-3b02-43cf-813e-3fe1b15660d0 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.549-0500 I INDEX [ReplWriterWorker-2] index build: starting on test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.545-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.512-0500 I STORAGE [conn46] createCollection: test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 with generated UUID: cc08d3d6-5de2-43fa-8864-5f6ff374c478 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.549-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.545-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: d54edda8-f64f-4f2c-af7f-b57414db3cde: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 (9fe9e09d-52a0-4bdd-8243-7e3ba8a99021 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.512-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3df77d9e-bdbc-4688-bd7b-4c5d977122c3: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef ( cf91dd77-ff2d-476a-85e2-c995dc77edf0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.549-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: c8b7e9ca-d6bf-4059-87c8-877732330510: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 (da4b85bd-5887-4c1a-8d81-529b905cc7c1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.546-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.512-0500 I INDEX [conn108] Index build completed: 3df77d9e-bdbc-4688-bd7b-4c5d977122c3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.549-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.547-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.513-0500 I STORAGE [conn108] createCollection: test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca with generated UUID: 86f8052a-dcfe-4a87-99e5-c27d843654fd and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.549-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.550-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.543-0500 I INDEX [conn114] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.551-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.559-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d54edda8-f64f-4f2c-af7f-b57414db3cde: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 ( 9fe9e09d-52a0-4bdd-8243-7e3ba8a99021 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.551-0500 I INDEX [conn46] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.554-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c8b7e9ca-d6bf-4059-87c8-877732330510: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 ( da4b85bd-5887-4c1a-8d81-529b905cc7c1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.567-0500 I INDEX [ReplWriterWorker-9] index build: starting on test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.559-0500 I INDEX [conn108] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.556-0500 I STORAGE [ReplWriterWorker-15] createCollection: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef with provided UUID: cf91dd77-ff2d-476a-85e2-c995dc77edf0 and options: { uuid: UUID("cf91dd77-ff2d-476a-85e2-c995dc77edf0"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.567-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.559-0500 I COMMAND [conn112] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.570-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.567-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: d3f8b79d-c53a-40b9-a412-ad560ffd7b2d: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 (da4b85bd-5887-4c1a-8d81-529b905cc7c1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.559-0500 I STORAGE [conn112] dropCollection: test1_fsmdb0.agg_out (ac8800bb-b22a-4476-8820-e9299799a2cb) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 3540), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.598-0500 I INDEX [ReplWriterWorker-5] index build: starting on test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.567-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.559-0500 I STORAGE [conn112] Finishing collection drop for test1_fsmdb0.agg_out (ac8800bb-b22a-4476-8820-e9299799a2cb).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.598-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.567-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.559-0500 I STORAGE [conn112] renameCollection: renaming collection 4727367f-2f01-4813-8c72-29b9fcb77a6b from test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.598-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: b446b09c-a13a-4cc5-8327-3d1fddea2a7d: test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a (4727367f-2f01-4813-8c72-29b9fcb77a6b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.569-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.559-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (ac8800bb-b22a-4476-8820-e9299799a2cb)'. Ident: 'index-153-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 3540)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.598-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.573-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d3f8b79d-c53a-40b9-a412-ad560ffd7b2d: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 ( da4b85bd-5887-4c1a-8d81-529b905cc7c1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.559-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (ac8800bb-b22a-4476-8820-e9299799a2cb)'. Ident: 'index-154-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 3540)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.599-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.575-0500 I STORAGE [ReplWriterWorker-0] createCollection: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef with provided UUID: cf91dd77-ff2d-476a-85e2-c995dc77edf0 and options: { uuid: UUID("cf91dd77-ff2d-476a-85e2-c995dc77edf0"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.559-0500 I STORAGE [conn112] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-151-8224331490264904478, commit timestamp: Timestamp(1574796676, 3540)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.602-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.590-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.559-0500 I INDEX [conn108] Registering index build: 2c0941ae-da98-4950-b1a7-ea63545b5537
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.607-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: b446b09c-a13a-4cc5-8327-3d1fddea2a7d: test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a ( 4727367f-2f01-4813-8c72-29b9fcb77a6b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.614-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.560-0500 I INDEX [conn114] Registering index build: caa9368f-2d32-4eb2-af92-65f76b1fe96b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.610-0500 I COMMAND [ReplWriterWorker-14] CMD: drop test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.614-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.560-0500 I INDEX [conn46] Registering index build: 1e778b4f-61c2-4507-99b0-64c014f11cbe
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.610-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b (34bb5cc6-2325-443c-b88b-046e8238b7c1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 3095), t: 1 } and commit timestamp Timestamp(1574796676, 3095)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.614-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 9f7bd371-e492-457a-bf45-1cc031c69377: test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a (4727367f-2f01-4813-8c72-29b9fcb77a6b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.560-0500 I COMMAND [conn67] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4868989200716908024, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4984853786772044057, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796676388), clusterTime: Timestamp(1574796676, 1014) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 1014), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 170ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.610-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b (34bb5cc6-2325-443c-b88b-046e8238b7c1).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.614-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.562-0500 I STORAGE [conn112] createCollection: test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 with generated UUID: 70727221-a864-47e1-8bf4-7c65bcaffb7d and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.610-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b (34bb5cc6-2325-443c-b88b-046e8238b7c1)'. Ident: 'index-162--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 3095)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.615-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.585-0500 I INDEX [conn108] index build: starting on test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.610-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b (34bb5cc6-2325-443c-b88b-046e8238b7c1)'. Ident: 'index-171--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 3095)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.616-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.585-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.610-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b'. Ident: collection-161--8000595249233899911, commit timestamp: Timestamp(1574796676, 3095)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.620-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 9f7bd371-e492-457a-bf45-1cc031c69377: test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a ( 4727367f-2f01-4813-8c72-29b9fcb77a6b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.585-0500 I STORAGE [conn108] Index build initialized: 2c0941ae-da98-4950-b1a7-ea63545b5537: test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca (86f8052a-dcfe-4a87-99e5-c27d843654fd ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.612-0500 I COMMAND [ReplWriterWorker-8] CMD: drop test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.624-0500 I COMMAND [conn56] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796676, 1849) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("c65a601f-c957-428e-adeb-3bd85740d639") }, $clusterTime: { clusterTime: Timestamp(1574796676, 1977), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 29822 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 152ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.585-0500 I INDEX [conn108] Waiting for index build to complete: 2c0941ae-da98-4950-b1a7-ea63545b5537
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.613-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 (da4b85bd-5887-4c1a-8d81-529b905cc7c1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 3224), t: 1 } and commit timestamp Timestamp(1574796676, 3224)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.625-0500 I COMMAND [ReplWriterWorker-13] CMD: drop test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.593-0500 I INDEX [conn112] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.613-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 (da4b85bd-5887-4c1a-8d81-529b905cc7c1).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.625-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b (34bb5cc6-2325-443c-b88b-046e8238b7c1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 3095), t: 1 } and commit timestamp Timestamp(1574796676, 3095)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.593-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.613-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 (da4b85bd-5887-4c1a-8d81-529b905cc7c1)'. Ident: 'index-168--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 3224)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.625-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b (34bb5cc6-2325-443c-b88b-046e8238b7c1).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.593-0500 I INDEX [conn112] Registering index build: ccb096c6-eabc-486e-b41c-534cc7995d65
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.613-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 (da4b85bd-5887-4c1a-8d81-529b905cc7c1)'. Ident: 'index-177--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 3224)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.625-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b (34bb5cc6-2325-443c-b88b-046e8238b7c1)'. Ident: 'index-162--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 3095)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.593-0500 I COMMAND [conn110] CMD: drop test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.613-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7'. Ident: collection-167--8000595249233899911, commit timestamp: Timestamp(1574796676, 3224)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.625-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b (34bb5cc6-2325-443c-b88b-046e8238b7c1)'. Ident: 'index-171--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 3095)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.593-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.628-0500 I INDEX [ReplWriterWorker-12] index build: starting on test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.625-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b'. Ident: collection-161--4104909142373009110, commit timestamp: Timestamp(1574796676, 3095)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.596-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.628-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.628-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.602-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 2c0941ae-da98-4950-b1a7-ea63545b5537: test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca ( 86f8052a-dcfe-4a87-99e5-c27d843654fd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.628-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 097c35da-b4f6-4ded-b7b6-4fc298d6e68f: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef (cf91dd77-ff2d-476a-85e2-c995dc77edf0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.628-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 (da4b85bd-5887-4c1a-8d81-529b905cc7c1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 3224), t: 1 } and commit timestamp Timestamp(1574796676, 3224)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.610-0500 I INDEX [conn114] index build: starting on test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.628-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.628-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 (da4b85bd-5887-4c1a-8d81-529b905cc7c1).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.610-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.629-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.628-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 (da4b85bd-5887-4c1a-8d81-529b905cc7c1)'. Ident: 'index-168--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 3224)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.610-0500 I STORAGE [conn114] Index build initialized: caa9368f-2d32-4eb2-af92-65f76b1fe96b: test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c (3d8202b5-3b02-43cf-813e-3fe1b15660d0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.631-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.628-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7 (da4b85bd-5887-4c1a-8d81-529b905cc7c1)'. Ident: 'index-177--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 3224)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.610-0500 I INDEX [conn114] Waiting for index build to complete: caa9368f-2d32-4eb2-af92-65f76b1fe96b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.633-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 097c35da-b4f6-4ded-b7b6-4fc298d6e68f: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef ( cf91dd77-ff2d-476a-85e2-c995dc77edf0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.628-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7'. Ident: collection-167--4104909142373009110, commit timestamp: Timestamp(1574796676, 3224)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.610-0500 I INDEX [conn108] Index build completed: 2c0941ae-da98-4950-b1a7-ea63545b5537
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.633-0500 I COMMAND [ReplWriterWorker-9] CMD: drop test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.633-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 (9fe9e09d-52a0-4bdd-8243-7e3ba8a99021) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 3292), t: 1 } and commit timestamp Timestamp(1574796676, 3292)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.610-0500 I STORAGE [conn110] dropCollection: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef (cf91dd77-ff2d-476a-85e2-c995dc77edf0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.644-0500 I INDEX [ReplWriterWorker-15] index build: starting on test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.633-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 (9fe9e09d-52a0-4bdd-8243-7e3ba8a99021).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.610-0500 I STORAGE [conn110] Finishing collection drop for test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef (cf91dd77-ff2d-476a-85e2-c995dc77edf0).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.644-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.633-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 (9fe9e09d-52a0-4bdd-8243-7e3ba8a99021)'. Ident: 'index-170--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 3292)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.610-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef (cf91dd77-ff2d-476a-85e2-c995dc77edf0)'. Ident: 'index-173-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 4046)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.644-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: c68401a6-2502-4efe-8be9-b87723099f67: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef (cf91dd77-ff2d-476a-85e2-c995dc77edf0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.633-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 (9fe9e09d-52a0-4bdd-8243-7e3ba8a99021)'. Ident: 'index-175--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 3292)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.610-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef (cf91dd77-ff2d-476a-85e2-c995dc77edf0)'. Ident: 'index-174-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 4046)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.644-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.633-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990'. Ident: collection-169--8000595249233899911, commit timestamp: Timestamp(1574796676, 3292)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.610-0500 I STORAGE [conn110] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef'. Ident: collection-172-8224331490264904478, commit timestamp: Timestamp(1574796676, 4046)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.645-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.634-0500 I STORAGE [ReplWriterWorker-7] createCollection: test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c with provided UUID: 3d8202b5-3b02-43cf-813e-3fe1b15660d0 and options: { uuid: UUID("3d8202b5-3b02-43cf-813e-3fe1b15660d0"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.611-0500 I COMMAND [conn71] command test1_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 455905991788625698, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5953885637881742420, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796676463), clusterTime: Timestamp(1574796676, 1526) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 1590), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:986 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 146ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.647-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.651-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.625-0500 I INDEX [conn46] index build: starting on test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.648-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c68401a6-2502-4efe-8be9-b87723099f67: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef ( cf91dd77-ff2d-476a-85e2-c995dc77edf0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.653-0500 I STORAGE [ReplWriterWorker-0] createCollection: test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 with provided UUID: cc08d3d6-5de2-43fa-8864-5f6ff374c478 and options: { uuid: UUID("cc08d3d6-5de2-43fa-8864-5f6ff374c478"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.625-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.649-0500 I COMMAND [ReplWriterWorker-5] CMD: drop test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.668-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.625-0500 I STORAGE [conn46] Index build initialized: 1e778b4f-61c2-4507-99b0-64c014f11cbe: test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 (cc08d3d6-5de2-43fa-8864-5f6ff374c478 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.649-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 (9fe9e09d-52a0-4bdd-8243-7e3ba8a99021) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 3292), t: 1 } and commit timestamp Timestamp(1574796676, 3292)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.670-0500 I STORAGE [ReplWriterWorker-1] createCollection: test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca with provided UUID: 86f8052a-dcfe-4a87-99e5-c27d843654fd and options: { uuid: UUID("86f8052a-dcfe-4a87-99e5-c27d843654fd"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.625-0500 I INDEX [conn46] Waiting for index build to complete: 1e778b4f-61c2-4507-99b0-64c014f11cbe
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.649-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 (9fe9e09d-52a0-4bdd-8243-7e3ba8a99021).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.686-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.626-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.649-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 (9fe9e09d-52a0-4bdd-8243-7e3ba8a99021)'. Ident: 'index-170--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 3292)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.689-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a (4727367f-2f01-4813-8c72-29b9fcb77a6b) to test1_fsmdb0.agg_out and drop ac8800bb-b22a-4476-8820-e9299799a2cb.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.626-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.649-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990 (9fe9e09d-52a0-4bdd-8243-7e3ba8a99021)'. Ident: 'index-175--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 3292)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.689-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test1_fsmdb0.agg_out (ac8800bb-b22a-4476-8820-e9299799a2cb) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 3540), t: 1 } and commit timestamp Timestamp(1574796676, 3540)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.626-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.649-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990'. Ident: collection-169--4104909142373009110, commit timestamp: Timestamp(1574796676, 3292)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.689-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test1_fsmdb0.agg_out (ac8800bb-b22a-4476-8820-e9299799a2cb).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.627-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.652-0500 I STORAGE [ReplWriterWorker-14] createCollection: test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c with provided UUID: 3d8202b5-3b02-43cf-813e-3fe1b15660d0 and options: { uuid: UUID("3d8202b5-3b02-43cf-813e-3fe1b15660d0"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.689-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 4727367f-2f01-4813-8c72-29b9fcb77a6b from test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.628-0500 I STORAGE [conn110] createCollection: test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a with generated UUID: dc38db08-6db6-4798-9510-6241acb8868b and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.668-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.689-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (ac8800bb-b22a-4476-8820-e9299799a2cb)'. Ident: 'index-158--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 3540)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.629-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.670-0500 I STORAGE [ReplWriterWorker-0] createCollection: test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 with provided UUID: cc08d3d6-5de2-43fa-8864-5f6ff374c478 and options: { uuid: UUID("cc08d3d6-5de2-43fa-8864-5f6ff374c478"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.689-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (ac8800bb-b22a-4476-8820-e9299799a2cb)'. Ident: 'index-165--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 3540)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.633-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.687-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.689-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-157--8000595249233899911, commit timestamp: Timestamp(1574796676, 3540)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.648-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: caa9368f-2d32-4eb2-af92-65f76b1fe96b: test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c ( 3d8202b5-3b02-43cf-813e-3fe1b15660d0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.689-0500 I STORAGE [ReplWriterWorker-8] createCollection: test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca with provided UUID: 86f8052a-dcfe-4a87-99e5-c27d843654fd and options: { uuid: UUID("86f8052a-dcfe-4a87-99e5-c27d843654fd"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.703-0500 I STORAGE [ReplWriterWorker-4] createCollection: test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 with provided UUID: 70727221-a864-47e1-8bf4-7c65bcaffb7d and options: { uuid: UUID("70727221-a864-47e1-8bf4-7c65bcaffb7d"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.649-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 1e778b4f-61c2-4507-99b0-64c014f11cbe: test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 ( cc08d3d6-5de2-43fa-8864-5f6ff374c478 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.704-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.718-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.657-0500 I INDEX [conn112] index build: starting on test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.707-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a (4727367f-2f01-4813-8c72-29b9fcb77a6b) to test1_fsmdb0.agg_out and drop ac8800bb-b22a-4476-8820-e9299799a2cb.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.737-0500 I INDEX [ReplWriterWorker-10] index build: starting on test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.657-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.707-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test1_fsmdb0.agg_out (ac8800bb-b22a-4476-8820-e9299799a2cb) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 3540), t: 1 } and commit timestamp Timestamp(1574796676, 3540)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.737-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.657-0500 I STORAGE [conn112] Index build initialized: ccb096c6-eabc-486e-b41c-534cc7995d65: test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 (70727221-a864-47e1-8bf4-7c65bcaffb7d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.707-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test1_fsmdb0.agg_out (ac8800bb-b22a-4476-8820-e9299799a2cb).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.737-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: bd9d7a2c-0613-4107-91ad-483b143c4d2a: test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca (86f8052a-dcfe-4a87-99e5-c27d843654fd ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.657-0500 I INDEX [conn112] Waiting for index build to complete: ccb096c6-eabc-486e-b41c-534cc7995d65
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.707-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 4727367f-2f01-4813-8c72-29b9fcb77a6b from test1_fsmdb0.tmp.agg_out.90659ce5-f9b9-497e-b0c3-27a00c60447a to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.737-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.657-0500 I INDEX [conn114] Index build completed: caa9368f-2d32-4eb2-af92-65f76b1fe96b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.707-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (ac8800bb-b22a-4476-8820-e9299799a2cb)'. Ident: 'index-158--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 3540)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.737-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.657-0500 I INDEX [conn46] Index build completed: 1e778b4f-61c2-4507-99b0-64c014f11cbe
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.707-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (ac8800bb-b22a-4476-8820-e9299799a2cb)'. Ident: 'index-165--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 3540)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.738-0500 I COMMAND [ReplWriterWorker-12] CMD: drop test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.657-0500 I COMMAND [conn114] command test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 3539), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 24171 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 114ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.707-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-157--4104909142373009110, commit timestamp: Timestamp(1574796676, 3540)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.739-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef (cf91dd77-ff2d-476a-85e2-c995dc77edf0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 4046), t: 1 } and commit timestamp Timestamp(1574796676, 4046)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.657-0500 I COMMAND [conn46] command test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 3539), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 7536 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 105ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.719-0500 I STORAGE [ReplWriterWorker-0] createCollection: test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 with provided UUID: 70727221-a864-47e1-8bf4-7c65bcaffb7d and options: { uuid: UUID("70727221-a864-47e1-8bf4-7c65bcaffb7d"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.739-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef (cf91dd77-ff2d-476a-85e2-c995dc77edf0).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.664-0500 I INDEX [conn110] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.733-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.739-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef (cf91dd77-ff2d-476a-85e2-c995dc77edf0)'. Ident: 'index-180--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 4046)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.664-0500 I COMMAND [conn108] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.752-0500 I INDEX [ReplWriterWorker-13] index build: starting on test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.739-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef (cf91dd77-ff2d-476a-85e2-c995dc77edf0)'. Ident: 'index-183--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 4046)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.665-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.agg_out (4727367f-2f01-4813-8c72-29b9fcb77a6b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 4555), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.752-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.739-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef'. Ident: collection-179--8000595249233899911, commit timestamp: Timestamp(1574796676, 4046)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.665-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.agg_out (4727367f-2f01-4813-8c72-29b9fcb77a6b).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.752-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: e4551674-2920-4a73-ad73-4bed2d45aac4: test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca (86f8052a-dcfe-4a87-99e5-c27d843654fd ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.740-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.665-0500 I STORAGE [conn108] renameCollection: renaming collection 86f8052a-dcfe-4a87-99e5-c27d843654fd from test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.752-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.742-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: bd9d7a2c-0613-4107-91ad-483b143c4d2a: test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca ( 86f8052a-dcfe-4a87-99e5-c27d843654fd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.665-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (4727367f-2f01-4813-8c72-29b9fcb77a6b)'. Ident: 'index-167-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 4555)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.753-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.744-0500 I STORAGE [ReplWriterWorker-4] createCollection: test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a with provided UUID: dc38db08-6db6-4798-9510-6241acb8868b and options: { uuid: UUID("dc38db08-6db6-4798-9510-6241acb8868b"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.665-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (4727367f-2f01-4813-8c72-29b9fcb77a6b)'. Ident: 'index-170-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 4555)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.754-0500 I COMMAND [ReplWriterWorker-9] CMD: drop test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.759-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.665-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-165-8224331490264904478, commit timestamp: Timestamp(1574796676, 4555)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.754-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef (cf91dd77-ff2d-476a-85e2-c995dc77edf0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 4046), t: 1 } and commit timestamp Timestamp(1574796676, 4046)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.774-0500 I INDEX [ReplWriterWorker-14] index build: starting on test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.665-0500 I INDEX [conn110] Registering index build: bd22ed5d-0f40-4ba6-9157-60287d534bd3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.754-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef (cf91dd77-ff2d-476a-85e2-c995dc77edf0).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.774-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.665-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.754-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef (cf91dd77-ff2d-476a-85e2-c995dc77edf0)'. Ident: 'index-180--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 4046)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.774-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: c0f82862-3fa7-4066-bc10-3c7a6baced6c: test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c (3d8202b5-3b02-43cf-813e-3fe1b15660d0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.665-0500 I COMMAND [conn68] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 863630489964802991, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7603745365306247586, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796676512), clusterTime: Timestamp(1574796676, 3292) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 3357), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 152ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.754-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef (cf91dd77-ff2d-476a-85e2-c995dc77edf0)'. Ident: 'index-183--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 4046)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.774-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.666-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.754-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.775-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.667-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.754-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef'. Ident: collection-179--4104909142373009110, commit timestamp: Timestamp(1574796676, 4046)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.777-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.668-0500 I STORAGE [conn114] createCollection: test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde with generated UUID: e83770de-b52e-4578-ac6a-fd02d0651031 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.758-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e4551674-2920-4a73-ad73-4bed2d45aac4: test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca ( 86f8052a-dcfe-4a87-99e5-c27d843654fd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.781-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c0f82862-3fa7-4066-bc10-3c7a6baced6c: test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c ( 3d8202b5-3b02-43cf-813e-3fe1b15660d0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.675-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: ccb096c6-eabc-486e-b41c-534cc7995d65: test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 ( 70727221-a864-47e1-8bf4-7c65bcaffb7d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.760-0500 I COMMAND [conn56] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796676, 4048) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("c65a601f-c957-428e-adeb-3bd85740d639") }, $clusterTime: { clusterTime: Timestamp(1574796676, 4112), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 128ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.796-0500 I INDEX [ReplWriterWorker-7] index build: starting on test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.692-0500 I INDEX [conn110] index build: starting on test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.760-0500 I STORAGE [ReplWriterWorker-4] createCollection: test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a with provided UUID: dc38db08-6db6-4798-9510-6241acb8868b and options: { uuid: UUID("dc38db08-6db6-4798-9510-6241acb8868b"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.796-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.692-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.775-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.796-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 42009bd6-9858-46ea-bed0-9f8d7d457aaa: test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 (cc08d3d6-5de2-43fa-8864-5f6ff374c478 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.692-0500 I STORAGE [conn110] Index build initialized: bd22ed5d-0f40-4ba6-9157-60287d534bd3: test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a (dc38db08-6db6-4798-9510-6241acb8868b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.792-0500 I INDEX [ReplWriterWorker-14] index build: starting on test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.796-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.692-0500 I INDEX [conn110] Waiting for index build to complete: bd22ed5d-0f40-4ba6-9157-60287d534bd3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.792-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.797-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.692-0500 I INDEX [conn112] Index build completed: ccb096c6-eabc-486e-b41c-534cc7995d65
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.792-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 07a6744b-4b0b-46cd-afdd-b292db457cac: test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c (3d8202b5-3b02-43cf-813e-3fe1b15660d0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.800-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.692-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.792-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.801-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca (86f8052a-dcfe-4a87-99e5-c27d843654fd) to test1_fsmdb0.agg_out and drop 4727367f-2f01-4813-8c72-29b9fcb77a6b.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.699-0500 I INDEX [conn114] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.793-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.802-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test1_fsmdb0.agg_out (4727367f-2f01-4813-8c72-29b9fcb77a6b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 4555), t: 1 } and commit timestamp Timestamp(1574796676, 4555)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.699-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.795-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.802-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test1_fsmdb0.agg_out (4727367f-2f01-4813-8c72-29b9fcb77a6b).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.701-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.798-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 07a6744b-4b0b-46cd-afdd-b292db457cac: test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c ( 3d8202b5-3b02-43cf-813e-3fe1b15660d0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.802-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 86f8052a-dcfe-4a87-99e5-c27d843654fd from test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.701-0500 I COMMAND [conn46] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.813-0500 I INDEX [ReplWriterWorker-5] index build: starting on test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.802-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (4727367f-2f01-4813-8c72-29b9fcb77a6b)'. Ident: 'index-174--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 4555)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I STORAGE [conn46] dropCollection: test1_fsmdb0.agg_out (86f8052a-dcfe-4a87-99e5-c27d843654fd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 5562), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.813-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.802-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (4727367f-2f01-4813-8c72-29b9fcb77a6b)'. Ident: 'index-181--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 4555)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I STORAGE [conn46] Finishing collection drop for test1_fsmdb0.agg_out (86f8052a-dcfe-4a87-99e5-c27d843654fd).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.813-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: ef0d03b6-f4c6-4e41-8496-5fd5addbe8f1: test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 (cc08d3d6-5de2-43fa-8864-5f6ff374c478 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.802-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-173--8000595249233899911, commit timestamp: Timestamp(1574796676, 4555)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I STORAGE [conn46] renameCollection: renaming collection cc08d3d6-5de2-43fa-8864-5f6ff374c478 from test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.813-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.802-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 42009bd6-9858-46ea-bed0-9f8d7d457aaa: test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 ( cc08d3d6-5de2-43fa-8864-5f6ff374c478 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (86f8052a-dcfe-4a87-99e5-c27d843654fd)'. Ident: 'index-181-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 5562)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.813-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.818-0500 I INDEX [ReplWriterWorker-1] index build: starting on test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (86f8052a-dcfe-4a87-99e5-c27d843654fd)'. Ident: 'index-182-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 5562)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.815-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.818-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I STORAGE [conn46] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-178-8224331490264904478, commit timestamp: Timestamp(1574796676, 5562)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.817-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca (86f8052a-dcfe-4a87-99e5-c27d843654fd) to test1_fsmdb0.agg_out and drop 4727367f-2f01-4813-8c72-29b9fcb77a6b.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.818-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 0322c9fb-fe62-4d63-9ccf-bed8c0c5d5e6: test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 (70727221-a864-47e1-8bf4-7c65bcaffb7d ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I COMMAND [conn112] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.818-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test1_fsmdb0.agg_out (4727367f-2f01-4813-8c72-29b9fcb77a6b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 4555), t: 1 } and commit timestamp Timestamp(1574796676, 4555)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.818-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7500801740488247790, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8322966332567935153, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796676511), clusterTime: Timestamp(1574796676, 3227) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 3293), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, 
Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 190ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.818-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test1_fsmdb0.agg_out (4727367f-2f01-4813-8c72-29b9fcb77a6b).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.819-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I STORAGE [conn112] dropCollection: test1_fsmdb0.agg_out (cc08d3d6-5de2-43fa-8864-5f6ff374c478) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 5563), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.818-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 86f8052a-dcfe-4a87-99e5-c27d843654fd from test1_fsmdb0.tmp.agg_out.a6af346e-e31b-434d-b0e4-32e48e7572ca to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.820-0500 I STORAGE [ReplWriterWorker-11] createCollection: test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde with provided UUID: e83770de-b52e-4578-ac6a-fd02d0651031 and options: { uuid: UUID("e83770de-b52e-4578-ac6a-fd02d0651031"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I STORAGE [conn112] Finishing collection drop for test1_fsmdb0.agg_out (cc08d3d6-5de2-43fa-8864-5f6ff374c478).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.818-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (4727367f-2f01-4813-8c72-29b9fcb77a6b)'. Ident: 'index-174--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 4555)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.821-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I STORAGE [conn112] renameCollection: renaming collection 3d8202b5-3b02-43cf-813e-3fe1b15660d0 from test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.818-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (4727367f-2f01-4813-8c72-29b9fcb77a6b)'. Ident: 'index-181--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 4555)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.831-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 0322c9fb-fe62-4d63-9ccf-bed8c0c5d5e6: test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 ( 70727221-a864-47e1-8bf4-7c65bcaffb7d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (cc08d3d6-5de2-43fa-8864-5f6ff374c478)'. Ident: 'index-180-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 5563)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.818-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-173--4104909142373009110, commit timestamp: Timestamp(1574796676, 4555)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.838-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (cc08d3d6-5de2-43fa-8864-5f6ff374c478)'. Ident: 'index-188-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 5563)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.818-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: ef0d03b6-f4c6-4e41-8496-5fd5addbe8f1: test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 ( cc08d3d6-5de2-43fa-8864-5f6ff374c478 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.861-0500 I INDEX [ReplWriterWorker-12] index build: starting on test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I STORAGE [conn112] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-177-8224331490264904478, commit timestamp: Timestamp(1574796676, 5563)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.835-0500 I INDEX [ReplWriterWorker-9] index build: starting on test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.861-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I INDEX [conn114] Registering index build: 9ac7b36e-ab41-459f-bac5-309cc5bc60fc
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.835-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.861-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 67f7b083-7bb2-4349-979c-38cd8df949f1: test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a (dc38db08-6db6-4798-9510-6241acb8868b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I COMMAND [conn65] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7254161323564187913, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4054962232016439313, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796676509), clusterTime: Timestamp(1574796676, 3224) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 3291), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, 
Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 191ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.835-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 8b137e36-e19f-4176-bbde-debdd4ae9619: test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 (70727221-a864-47e1-8bf4-7c65bcaffb7d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.861-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.702-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: bd22ed5d-0f40-4ba6-9157-60287d534bd3: test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a ( dc38db08-6db6-4798-9510-6241acb8868b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.835-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.862-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.705-0500 I STORAGE [conn112] createCollection: test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 with generated UUID: 4d8de200-c658-4c08-af3b-b8306fa1c260 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.835-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.863-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 (cc08d3d6-5de2-43fa-8864-5f6ff374c478) to test1_fsmdb0.agg_out and drop 86f8052a-dcfe-4a87-99e5-c27d843654fd.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.705-0500 I STORAGE [conn46] createCollection: test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 with generated UUID: e3d9437e-388e-412a-aeee-97c2519eabc1 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.838-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.864-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.733-0500 I INDEX [conn114] index build: starting on test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.839-0500 I STORAGE [ReplWriterWorker-8] createCollection: test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde with provided UUID: e83770de-b52e-4578-ac6a-fd02d0651031 and options: { uuid: UUID("e83770de-b52e-4578-ac6a-fd02d0651031"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.865-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test1_fsmdb0.agg_out (86f8052a-dcfe-4a87-99e5-c27d843654fd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 5562), t: 1 } and commit timestamp Timestamp(1574796676, 5562)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.733-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.841-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 8b137e36-e19f-4176-bbde-debdd4ae9619: test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 ( 70727221-a864-47e1-8bf4-7c65bcaffb7d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.865-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test1_fsmdb0.agg_out (86f8052a-dcfe-4a87-99e5-c27d843654fd).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.733-0500 I STORAGE [conn114] Index build initialized: 9ac7b36e-ab41-459f-bac5-309cc5bc60fc: test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde (e83770de-b52e-4578-ac6a-fd02d0651031 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.856-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.865-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection cc08d3d6-5de2-43fa-8864-5f6ff374c478 from test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.733-0500 I INDEX [conn114] Waiting for index build to complete: 9ac7b36e-ab41-459f-bac5-309cc5bc60fc
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.879-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.865-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (86f8052a-dcfe-4a87-99e5-c27d843654fd)'. Ident: 'index-190--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 5562)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.733-0500 I INDEX [conn110] Index build completed: bd22ed5d-0f40-4ba6-9157-60287d534bd3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.879-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.865-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (86f8052a-dcfe-4a87-99e5-c27d843654fd)'. Ident: 'index-193--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 5562)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.737-0500 I INDEX [conn112] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.879-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: ce48c6b0-0cd0-45b2-976e-3f94261bb59e: test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a (dc38db08-6db6-4798-9510-6241acb8868b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.865-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-189--8000595249233899911, commit timestamp: Timestamp(1574796676, 5562)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.745-0500 I INDEX [conn46] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.879-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.865-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c (3d8202b5-3b02-43cf-813e-3fe1b15660d0) to test1_fsmdb0.agg_out and drop cc08d3d6-5de2-43fa-8864-5f6ff374c478.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.745-0500 I COMMAND [conn108] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.880-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.866-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test1_fsmdb0.agg_out (cc08d3d6-5de2-43fa-8864-5f6ff374c478) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 5563), t: 1 } and commit timestamp Timestamp(1574796676, 5563)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.745-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.agg_out (3d8202b5-3b02-43cf-813e-3fe1b15660d0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 6067), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.880-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 (cc08d3d6-5de2-43fa-8864-5f6ff374c478) to test1_fsmdb0.agg_out and drop 86f8052a-dcfe-4a87-99e5-c27d843654fd.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.866-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test1_fsmdb0.agg_out (cc08d3d6-5de2-43fa-8864-5f6ff374c478).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.745-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.agg_out (3d8202b5-3b02-43cf-813e-3fe1b15660d0).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.882-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.866-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 3d8202b5-3b02-43cf-813e-3fe1b15660d0 from test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.745-0500 I STORAGE [conn108] renameCollection: renaming collection 70727221-a864-47e1-8bf4-7c65bcaffb7d from test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.882-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test1_fsmdb0.agg_out (86f8052a-dcfe-4a87-99e5-c27d843654fd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 5562), t: 1 } and commit timestamp Timestamp(1574796676, 5562)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.866-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (cc08d3d6-5de2-43fa-8864-5f6ff374c478)'. Ident: 'index-188--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 5563)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.745-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (3d8202b5-3b02-43cf-813e-3fe1b15660d0)'. Ident: 'index-179-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 6067)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.882-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test1_fsmdb0.agg_out (86f8052a-dcfe-4a87-99e5-c27d843654fd).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.866-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (cc08d3d6-5de2-43fa-8864-5f6ff374c478)'. Ident: 'index-199--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 5563)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.745-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (3d8202b5-3b02-43cf-813e-3fe1b15660d0)'. Ident: 'index-186-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 6067)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.882-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection cc08d3d6-5de2-43fa-8864-5f6ff374c478 from test1_fsmdb0.tmp.agg_out.67fd8893-c626-4f0b-9d37-bbfbd24a2081 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.866-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-187--8000595249233899911, commit timestamp: Timestamp(1574796676, 5563)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.745-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-176-8224331490264904478, commit timestamp: Timestamp(1574796676, 6067)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.882-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (86f8052a-dcfe-4a87-99e5-c27d843654fd)'. Ident: 'index-190--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 5562)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.866-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 67f7b083-7bb2-4349-979c-38cd8df949f1: test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a ( dc38db08-6db6-4798-9510-6241acb8868b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.745-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.882-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (86f8052a-dcfe-4a87-99e5-c27d843654fd)'. Ident: 'index-193--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 5562)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.869-0500 I STORAGE [ReplWriterWorker-13] createCollection: test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 with provided UUID: 4d8de200-c658-4c08-af3b-b8306fa1c260 and options: { uuid: UUID("4d8de200-c658-4c08-af3b-b8306fa1c260"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.746-0500 I INDEX [conn46] Registering index build: ae591c8e-d8ef-4364-90dd-1a75ea984b03
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.882-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-189--4104909142373009110, commit timestamp: Timestamp(1574796676, 5562)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.882-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.746-0500 I INDEX [conn112] Registering index build: 49e213a1-87c6-46ca-a3cb-28285d64e0c0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.883-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c (3d8202b5-3b02-43cf-813e-3fe1b15660d0) to test1_fsmdb0.agg_out and drop cc08d3d6-5de2-43fa-8864-5f6ff374c478.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.883-0500 I STORAGE [ReplWriterWorker-11] createCollection: test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 with provided UUID: e3d9437e-388e-412a-aeee-97c2519eabc1 and options: { uuid: UUID("e3d9437e-388e-412a-aeee-97c2519eabc1"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.746-0500 I COMMAND [conn67] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 254554328849793292, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1293041613855259089, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796676561), clusterTime: Timestamp(1574796676, 3604) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 3604), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 184ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.883-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test1_fsmdb0.agg_out (cc08d3d6-5de2-43fa-8864-5f6ff374c478) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 5563), t: 1 } and commit timestamp Timestamp(1574796676, 5563)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.895-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.746-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.883-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test1_fsmdb0.agg_out (cc08d3d6-5de2-43fa-8864-5f6ff374c478).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.899-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 (70727221-a864-47e1-8bf4-7c65bcaffb7d) to test1_fsmdb0.agg_out and drop 3d8202b5-3b02-43cf-813e-3fe1b15660d0.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.748-0500 I STORAGE [conn110] createCollection: test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 with generated UUID: 976a0b0c-38a0-46aa-945b-11d8b16efca0 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.883-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 3d8202b5-3b02-43cf-813e-3fe1b15660d0 from test1_fsmdb0.tmp.agg_out.d6ddf6d4-3cc4-40aa-aba1-ae6e6968302c to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.900-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test1_fsmdb0.agg_out (3d8202b5-3b02-43cf-813e-3fe1b15660d0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 6067), t: 1 } and commit timestamp Timestamp(1574796676, 6067)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.754-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.883-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (cc08d3d6-5de2-43fa-8864-5f6ff374c478)'. Ident: 'index-188--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 5563)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.900-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test1_fsmdb0.agg_out (3d8202b5-3b02-43cf-813e-3fe1b15660d0).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.772-0500 I INDEX [conn46] index build: starting on test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.883-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (cc08d3d6-5de2-43fa-8864-5f6ff374c478)'. Ident: 'index-199--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 5563)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.900-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 70727221-a864-47e1-8bf4-7c65bcaffb7d from test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.772-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.883-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-187--4104909142373009110, commit timestamp: Timestamp(1574796676, 5563)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.900-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (3d8202b5-3b02-43cf-813e-3fe1b15660d0)'. Ident: 'index-186--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 6067)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.772-0500 I STORAGE [conn46] Index build initialized: ae591c8e-d8ef-4364-90dd-1a75ea984b03: test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 (e3d9437e-388e-412a-aeee-97c2519eabc1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.884-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: ce48c6b0-0cd0-45b2-976e-3f94261bb59e: test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a ( dc38db08-6db6-4798-9510-6241acb8868b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.900-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (3d8202b5-3b02-43cf-813e-3fe1b15660d0)'. Ident: 'index-197--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 6067)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.772-0500 I INDEX [conn46] Waiting for index build to complete: ae591c8e-d8ef-4364-90dd-1a75ea984b03
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.886-0500 I STORAGE [ReplWriterWorker-6] createCollection: test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 with provided UUID: 4d8de200-c658-4c08-af3b-b8306fa1c260 and options: { uuid: UUID("4d8de200-c658-4c08-af3b-b8306fa1c260"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.900-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-185--8000595249233899911, commit timestamp: Timestamp(1574796676, 6067)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.774-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 9ac7b36e-ab41-459f-bac5-309cc5bc60fc: test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde ( e83770de-b52e-4578-ac6a-fd02d0651031 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.900-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.900-0500 I STORAGE [ReplWriterWorker-6] createCollection: test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 with provided UUID: 976a0b0c-38a0-46aa-945b-11d8b16efca0 and options: { uuid: UUID("976a0b0c-38a0-46aa-945b-11d8b16efca0"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.782-0500 I INDEX [conn110] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.901-0500 I STORAGE [ReplWriterWorker-7] createCollection: test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 with provided UUID: e3d9437e-388e-412a-aeee-97c2519eabc1 and options: { uuid: UUID("e3d9437e-388e-412a-aeee-97c2519eabc1"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.915-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.798-0500 I INDEX [conn112] index build: starting on test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.916-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.931-0500 I INDEX [ReplWriterWorker-2] index build: starting on test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.798-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.920-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 (70727221-a864-47e1-8bf4-7c65bcaffb7d) to test1_fsmdb0.agg_out and drop 3d8202b5-3b02-43cf-813e-3fe1b15660d0.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.931-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.798-0500 I STORAGE [conn112] Index build initialized: 49e213a1-87c6-46ca-a3cb-28285d64e0c0: test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 (4d8de200-c658-4c08-af3b-b8306fa1c260 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.920-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test1_fsmdb0.agg_out (3d8202b5-3b02-43cf-813e-3fe1b15660d0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 6067), t: 1 } and commit timestamp Timestamp(1574796676, 6067)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.932-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 0d043d8e-3b8a-4f4c-a862-5689dcfffefa: test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde (e83770de-b52e-4578-ac6a-fd02d0651031 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.798-0500 I INDEX [conn112] Waiting for index build to complete: 49e213a1-87c6-46ca-a3cb-28285d64e0c0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.920-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test1_fsmdb0.agg_out (3d8202b5-3b02-43cf-813e-3fe1b15660d0).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.932-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.798-0500 I INDEX [conn114] Index build completed: 9ac7b36e-ab41-459f-bac5-309cc5bc60fc
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.920-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 70727221-a864-47e1-8bf4-7c65bcaffb7d from test1_fsmdb0.tmp.agg_out.7867277a-6d0f-42d2-bd8a-0acc763f3610 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.932-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.798-0500 I COMMAND [conn108] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.920-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (3d8202b5-3b02-43cf-813e-3fe1b15660d0)'. Ident: 'index-186--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 6067)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.934-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.798-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.agg_out (70727221-a864-47e1-8bf4-7c65bcaffb7d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 6573), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.920-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (3d8202b5-3b02-43cf-813e-3fe1b15660d0)'. Ident: 'index-197--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 6067)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.937-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a (dc38db08-6db6-4798-9510-6241acb8868b) to test1_fsmdb0.agg_out and drop 70727221-a864-47e1-8bf4-7c65bcaffb7d.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.798-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.agg_out (70727221-a864-47e1-8bf4-7c65bcaffb7d).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.920-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-185--4104909142373009110, commit timestamp: Timestamp(1574796676, 6067)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.938-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test1_fsmdb0.agg_out (70727221-a864-47e1-8bf4-7c65bcaffb7d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 6573), t: 1 } and commit timestamp Timestamp(1574796676, 6573)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.798-0500 I STORAGE [conn108] renameCollection: renaming collection dc38db08-6db6-4798-9510-6241acb8868b from test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.921-0500 I STORAGE [ReplWriterWorker-1] createCollection: test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 with provided UUID: 976a0b0c-38a0-46aa-945b-11d8b16efca0 and options: { uuid: UUID("976a0b0c-38a0-46aa-945b-11d8b16efca0"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.938-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test1_fsmdb0.agg_out (70727221-a864-47e1-8bf4-7c65bcaffb7d).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.798-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (70727221-a864-47e1-8bf4-7c65bcaffb7d)'. Ident: 'index-185-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 6573)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.935-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.938-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection dc38db08-6db6-4798-9510-6241acb8868b from test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.798-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (70727221-a864-47e1-8bf4-7c65bcaffb7d)'. Ident: 'index-190-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 6573)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.982-0500 I INDEX [ReplWriterWorker-15] index build: starting on test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.938-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (70727221-a864-47e1-8bf4-7c65bcaffb7d)'. Ident: 'index-192--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 6573)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.798-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-183-8224331490264904478, commit timestamp: Timestamp(1574796676, 6573)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.982-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.938-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (70727221-a864-47e1-8bf4-7c65bcaffb7d)'. Ident: 'index-201--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 6573)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.798-0500 I INDEX [conn110] Registering index build: e709720b-32d7-4974-92ee-e33fd3e0f153
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.982-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: b321cbe2-3161-495e-b1c4-88cbf34594a2: test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde (e83770de-b52e-4578-ac6a-fd02d0651031 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.938-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-191--8000595249233899911, commit timestamp: Timestamp(1574796676, 6573)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.798-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.982-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.938-0500 I STORAGE [ReplWriterWorker-3] createCollection: test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 with provided UUID: 41bb520a-fcd0-4263-91f0-dcf65f03e1da and options: { uuid: UUID("41bb520a-fcd0-4263-91f0-dcf65f03e1da"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.798-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.983-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.968-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 0d043d8e-3b8a-4f4c-a862-5689dcfffefa: test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde ( e83770de-b52e-4578-ac6a-fd02d0651031 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.799-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4752869586128190048, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 9191875515785598142, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796676626), clusterTime: Timestamp(1574796676, 4048) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 4112), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 171ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.985-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:16.983-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.799-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.985-0500 I COMMAND [conn56] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796676, 6134) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("c65a601f-c957-428e-adeb-3bd85740d639") }, $clusterTime: { clusterTime: Timestamp(1574796676, 6134), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 13135 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 221ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.000-0500 I INDEX [ReplWriterWorker-1] index build: starting on test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.800-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.987-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: b321cbe2-3161-495e-b1c4-88cbf34594a2: test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde ( e83770de-b52e-4578-ac6a-fd02d0651031 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.000-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.802-0500 I STORAGE [conn108] createCollection: test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 with generated UUID: 41bb520a-fcd0-4263-91f0-dcf65f03e1da and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.987-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a (dc38db08-6db6-4798-9510-6241acb8868b) to test1_fsmdb0.agg_out and drop 70727221-a864-47e1-8bf4-7c65bcaffb7d.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.000-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: cc167e11-9aee-475a-b397-cb9994b9a176: test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 (e3d9437e-388e-412a-aeee-97c2519eabc1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.810-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.987-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test1_fsmdb0.agg_out (70727221-a864-47e1-8bf4-7c65bcaffb7d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 6573), t: 1 } and commit timestamp Timestamp(1574796676, 6573)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.000-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.814-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.987-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test1_fsmdb0.agg_out (70727221-a864-47e1-8bf4-7c65bcaffb7d).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.001-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.831-0500 I INDEX [conn110] index build: starting on test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.987-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection dc38db08-6db6-4798-9510-6241acb8868b from test1_fsmdb0.tmp.agg_out.26e4c3d1-1bd4-4bc5-9eab-7b9c599a364a to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.003-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.831-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.987-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (70727221-a864-47e1-8bf4-7c65bcaffb7d)'. Ident: 'index-192--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 6573)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.011-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: cc167e11-9aee-475a-b397-cb9994b9a176: test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 ( e3d9437e-388e-412a-aeee-97c2519eabc1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.831-0500 I STORAGE [conn110] Index build initialized: e709720b-32d7-4974-92ee-e33fd3e0f153: test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 (976a0b0c-38a0-46aa-945b-11d8b16efca0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.987-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (70727221-a864-47e1-8bf4-7c65bcaffb7d)'. Ident: 'index-201--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 6573)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.018-0500 I INDEX [ReplWriterWorker-15] index build: starting on test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.831-0500 I INDEX [conn110] Waiting for index build to complete: e709720b-32d7-4974-92ee-e33fd3e0f153
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.987-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-191--4104909142373009110, commit timestamp: Timestamp(1574796676, 6573)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.018-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.832-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:16.988-0500 I STORAGE [ReplWriterWorker-5] createCollection: test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 with provided UUID: 41bb520a-fcd0-4263-91f0-dcf65f03e1da and options: { uuid: UUID("41bb520a-fcd0-4263-91f0-dcf65f03e1da"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.018-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: ec7347ee-c0c3-4f0c-9355-96fba1a87908: test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 (4d8de200-c658-4c08-af3b-b8306fa1c260 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.834-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: ae591c8e-d8ef-4364-90dd-1a75ea984b03: test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 ( e3d9437e-388e-412a-aeee-97c2519eabc1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.001-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.018-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.834-0500 I INDEX [conn46] Index build completed: ae591c8e-d8ef-4364-90dd-1a75ea984b03
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.019-0500 I INDEX [ReplWriterWorker-5] index build: starting on test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.019-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.836-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 49e213a1-87c6-46ca-a3cb-28285d64e0c0: test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 ( 4d8de200-c658-4c08-af3b-b8306fa1c260 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.020-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.021-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.837-0500 I INDEX [conn112] Index build completed: 49e213a1-87c6-46ca-a3cb-28285d64e0c0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.020-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 9e4f0026-470c-4043-8af4-17bbbb1d1e99: test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 (e3d9437e-388e-412a-aeee-97c2519eabc1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.024-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ec7347ee-c0c3-4f0c-9355-96fba1a87908: test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 ( 4d8de200-c658-4c08-af3b-b8306fa1c260 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.844-0500 I INDEX [conn108] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.020-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.039-0500 I INDEX [ReplWriterWorker-3] index build: starting on test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.844-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.020-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.039-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.846-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.023-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.039-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 64595596-b0f8-4ab2-8423-54ceb8079540: test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 (976a0b0c-38a0-46aa-945b-11d8b16efca0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.846-0500 I COMMAND [conn114] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.031-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 9e4f0026-470c-4043-8af4-17bbbb1d1e99: test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 ( e3d9437e-388e-412a-aeee-97c2519eabc1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.039-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.847-0500 I STORAGE [conn114] dropCollection: test1_fsmdb0.agg_out (dc38db08-6db6-4798-9510-6241acb8868b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 7082), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.038-0500 I INDEX [ReplWriterWorker-6] index build: starting on test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.039-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.847-0500 I STORAGE [conn114] Finishing collection drop for test1_fsmdb0.agg_out (dc38db08-6db6-4798-9510-6241acb8868b).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.038-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.040-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde (e83770de-b52e-4578-ac6a-fd02d0651031) to test1_fsmdb0.agg_out and drop dc38db08-6db6-4798-9510-6241acb8868b.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.847-0500 I STORAGE [conn114] renameCollection: renaming collection e83770de-b52e-4578-ac6a-fd02d0651031 from test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.038-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 81448bdc-bf06-4b8d-abda-af60acbffd26: test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 (4d8de200-c658-4c08-af3b-b8306fa1c260 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.042-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.847-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (dc38db08-6db6-4798-9510-6241acb8868b)'. Ident: 'index-193-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 7082)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.038-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.042-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test1_fsmdb0.agg_out (dc38db08-6db6-4798-9510-6241acb8868b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 7082), t: 1 } and commit timestamp Timestamp(1574796676, 7082)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.847-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (dc38db08-6db6-4798-9510-6241acb8868b)'. Ident: 'index-194-8224331490264904478', commit timestamp: 'Timestamp(1574796676, 7082)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.038-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.042-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test1_fsmdb0.agg_out (dc38db08-6db6-4798-9510-6241acb8868b).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.847-0500 I STORAGE [conn114] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-191-8224331490264904478, commit timestamp: Timestamp(1574796676, 7082)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.040-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.042-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection e83770de-b52e-4578-ac6a-fd02d0651031 from test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.847-0500 I INDEX [conn108] Registering index build: 13bd0b6a-c4c2-4ef4-b90e-85903e4e8e66
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.046-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 81448bdc-bf06-4b8d-abda-af60acbffd26: test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 ( 4d8de200-c658-4c08-af3b-b8306fa1c260 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.042-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (dc38db08-6db6-4798-9510-6241acb8868b)'. Ident: 'index-196--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 7082)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.847-0500 I COMMAND [conn68] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5261533044386768279, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2711017130774790052, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796676667), clusterTime: Timestamp(1574796676, 4555) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 4555), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 179ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.057-0500 I INDEX [ReplWriterWorker-11] index build: starting on test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.042-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (dc38db08-6db6-4798-9510-6241acb8868b)'. Ident: 'index-205--8000595249233899911', commit timestamp: 'Timestamp(1574796676, 7082)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.057-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.848-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: e709720b-32d7-4974-92ee-e33fd3e0f153: test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 ( 976a0b0c-38a0-46aa-945b-11d8b16efca0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.042-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-195--8000595249233899911, commit timestamp: Timestamp(1574796676, 7082)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.057-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 7746c1ff-c1c9-4358-9095-36504f47f03c: test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 (976a0b0c-38a0-46aa-945b-11d8b16efca0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:16.865-0500 I INDEX [conn108] index build: starting on test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.044-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.057-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:17.044-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 64595596-b0f8-4ab2-8423-54ceb8079540: test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 ( 976a0b0c-38a0-46aa-945b-11d8b16efca0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.058-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:17.044-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39070 #120 (45 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.058-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde (e83770de-b52e-4578-ac6a-fd02d0651031) to test1_fsmdb0.agg_out and drop dc38db08-6db6-4798-9510-6241acb8868b.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.059-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 (e3d9437e-388e-412a-aeee-97c2519eabc1) to test1_fsmdb0.agg_out and drop e83770de-b52e-4578-ac6a-fd02d0651031.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I STORAGE [conn108] Index build initialized: 13bd0b6a-c4c2-4ef4-b90e-85903e4e8e66: test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 (41bb520a-fcd0-4263-91f0-dcf65f03e1da ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.060-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.059-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test1_fsmdb0.agg_out (e83770de-b52e-4578-ac6a-fd02d0651031) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 2), t: 1 } and commit timestamp Timestamp(1574796680, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I INDEX [conn108] Waiting for index build to complete: 13bd0b6a-c4c2-4ef4-b90e-85903e4e8e66
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.060-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test1_fsmdb0.agg_out (dc38db08-6db6-4798-9510-6241acb8868b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796676, 7082), t: 1 } and commit timestamp Timestamp(1574796676, 7082)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.059-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test1_fsmdb0.agg_out (e83770de-b52e-4578-ac6a-fd02d0651031).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I INDEX [conn110] Index build completed: e709720b-32d7-4974-92ee-e33fd3e0f153
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.060-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test1_fsmdb0.agg_out (dc38db08-6db6-4798-9510-6241acb8868b).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.059-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection e3d9437e-388e-412a-aeee-97c2519eabc1 from test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I COMMAND [conn112] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.060-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection e83770de-b52e-4578-ac6a-fd02d0651031 from test1_fsmdb0.tmp.agg_out.aac6542e-f6a5-4f9f-b655-036d929c7fde to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.059-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (e83770de-b52e-4578-ac6a-fd02d0651031)'. Ident: 'index-204--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I COMMAND [conn110] command test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 6571), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 16055 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 3270ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.060-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (dc38db08-6db6-4798-9510-6241acb8868b)'. Ident: 'index-196--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 7082)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.059-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (e83770de-b52e-4578-ac6a-fd02d0651031)'. Ident: 'index-213--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I NETWORK [conn120] received client metadata from 127.0.0.1:39070 conn120: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.060-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (dc38db08-6db6-4798-9510-6241acb8868b)'. Ident: 'index-205--4104909142373009110', commit timestamp: 'Timestamp(1574796676, 7082)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.059-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-203--8000595249233899911, commit timestamp: Timestamp(1574796680, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.060-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 (4d8de200-c658-4c08-af3b-b8306fa1c260) to test1_fsmdb0.agg_out and drop e3d9437e-388e-412a-aeee-97c2519eabc1.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.060-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-195--4104909142373009110, commit timestamp: Timestamp(1574796676, 7082)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I STORAGE [conn112] dropCollection: test1_fsmdb0.agg_out (e83770de-b52e-4578-ac6a-fd02d0651031) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 2), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.060-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test1_fsmdb0.agg_out (e3d9437e-388e-412a-aeee-97c2519eabc1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 3), t: 1 } and commit timestamp Timestamp(1574796680, 3)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:17.061-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 7746c1ff-c1c9-4358-9095-36504f47f03c: test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 ( 976a0b0c-38a0-46aa-945b-11d8b16efca0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I STORAGE [conn112] Finishing collection drop for test1_fsmdb0.agg_out (e83770de-b52e-4578-ac6a-fd02d0651031).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.060-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test1_fsmdb0.agg_out (e3d9437e-388e-412a-aeee-97c2519eabc1).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I STORAGE [conn112] renameCollection: renaming collection e3d9437e-388e-412a-aeee-97c2519eabc1 from test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.060-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 (e3d9437e-388e-412a-aeee-97c2519eabc1) to test1_fsmdb0.agg_out and drop e83770de-b52e-4578-ac6a-fd02d0651031.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.060-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 4d8de200-c658-4c08-af3b-b8306fa1c260 from test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (e83770de-b52e-4578-ac6a-fd02d0651031)'. Ident: 'index-197-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.061-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test1_fsmdb0.agg_out (e83770de-b52e-4578-ac6a-fd02d0651031) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 2), t: 1 } and commit timestamp Timestamp(1574796680, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.060-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (e3d9437e-388e-412a-aeee-97c2519eabc1)'. Ident: 'index-210--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 3)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (e83770de-b52e-4578-ac6a-fd02d0651031)'. Ident: 'index-198-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.061-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test1_fsmdb0.agg_out (e83770de-b52e-4578-ac6a-fd02d0651031).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.060-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (e3d9437e-388e-412a-aeee-97c2519eabc1)'. Ident: 'index-217--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 3)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I STORAGE [conn112] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-195-8224331490264904478, commit timestamp: Timestamp(1574796680, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.061-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection e3d9437e-388e-412a-aeee-97c2519eabc1 from test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.060-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-209--8000595249233899911, commit timestamp: Timestamp(1574796680, 3)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I COMMAND [conn112] command test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84 appName: "tid:4" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test1_fsmdb0.tmp.agg_out.61034d39-ea75-428a-a4e8-4ad7def31d84", to: "test1_fsmdb0.agg_out", collectionOptions: { validationLevel: "off", validationAction: "warn" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 8082), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 3186036 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 3186ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.061-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (e83770de-b52e-4578-ac6a-fd02d0651031)'. Ident: 'index-204--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.086-0500 I STORAGE [ReplWriterWorker-2] createCollection: test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 with provided UUID: 0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0 and options: { uuid: UUID("0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I COMMAND [conn46] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.061-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (e83770de-b52e-4578-ac6a-fd02d0651031)'. Ident: 'index-213--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I STORAGE [conn46] dropCollection: test1_fsmdb0.agg_out (e3d9437e-388e-412a-aeee-97c2519eabc1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 3), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.061-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-203--4104909142373009110, commit timestamp: Timestamp(1574796680, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I STORAGE [conn46] Finishing collection drop for test1_fsmdb0.agg_out (e3d9437e-388e-412a-aeee-97c2519eabc1).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.063-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 (4d8de200-c658-4c08-af3b-b8306fa1c260) to test1_fsmdb0.agg_out and drop e3d9437e-388e-412a-aeee-97c2519eabc1.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.053-0500 I STORAGE [conn46] renameCollection: renaming collection 4d8de200-c658-4c08-af3b-b8306fa1c260 from test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.063-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test1_fsmdb0.agg_out (e3d9437e-388e-412a-aeee-97c2519eabc1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 3), t: 1 } and commit timestamp Timestamp(1574796680, 3)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.054-0500 I COMMAND [conn65] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 95873510558192195, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8170393857803577029, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796676704), clusterTime: Timestamp(1574796676, 5627) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 5627), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3348ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.063-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test1_fsmdb0.agg_out (e3d9437e-388e-412a-aeee-97c2519eabc1).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.054-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (e3d9437e-388e-412a-aeee-97c2519eabc1)'. Ident: 'index-203-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 3)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.063-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 4d8de200-c658-4c08-af3b-b8306fa1c260 from test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.054-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (e3d9437e-388e-412a-aeee-97c2519eabc1)'. Ident: 'index-204-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 3)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.063-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (e3d9437e-388e-412a-aeee-97c2519eabc1)'. Ident: 'index-210--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 3)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.054-0500 I STORAGE [conn46] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-200-8224331490264904478, commit timestamp: Timestamp(1574796680, 3)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.063-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (e3d9437e-388e-412a-aeee-97c2519eabc1)'. Ident: 'index-217--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 3)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.054-0500 I COMMAND [conn46] command test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130 appName: "tid:3" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test1_fsmdb0.tmp.agg_out.dc8eab37-ca44-43a1-bd21-4f2f1e06a130", to: "test1_fsmdb0.agg_out", collectionOptions: { validationLevel: "off", validationAction: "warn" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 8082), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 3184708 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 3185ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.063-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-209--4104909142373009110, commit timestamp: Timestamp(1574796680, 3)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.054-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.054-0500 I COMMAND [conn119] command test1_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796676, 6134), lsid: { id: UUID("c8c15e08-f1a6-4edc-831c-249e4d0ea0c0") }, $clusterTime: { clusterTime: Timestamp(1574796676, 6134), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796676, 6134). Collection minimum timestamp is Timestamp(1574796680, 1)" errName:SnapshotUnavailable errCode:246 reslen:579 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 3066843 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 3067ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.054-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7935035189191304127, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 9001452589855803767, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796676704), clusterTime: Timestamp(1574796676, 5563) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 5627), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3349ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.054-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.055-0500 I STORAGE [conn46] createCollection: test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 with generated UUID: 0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.056-0500 I STORAGE [conn112] createCollection: test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 with generated UUID: 35dbc508-59b9-48c4-b237-ef711653cd51 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.057-0500 I STORAGE [conn114] createCollection: test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 with generated UUID: 317e4cb4-8991-4401-bad1-1943f5aa7f79 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.058-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.076-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 13bd0b6a-c4c2-4ef4-b90e-85903e4e8e66: test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 ( 41bb520a-fcd0-4263-91f0-dcf65f03e1da ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.076-0500 I INDEX [conn108] Index build completed: 13bd0b6a-c4c2-4ef4-b90e-85903e4e8e66
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.076-0500 I COMMAND [conn108] command test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 7079), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 2790 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 3232ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.083-0500 I INDEX [conn46] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.083-0500 I INDEX [conn46] Registering index build: 6048edae-9d30-4ac9-9e0d-476ab2591d6c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.090-0500 I INDEX [conn112] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.097-0500 I INDEX [conn114] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.100-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.101-0500 I STORAGE [ReplWriterWorker-14] createCollection: test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 with provided UUID: 35dbc508-59b9-48c4-b237-ef711653cd51 and options: { uuid: UUID("35dbc508-59b9-48c4-b237-ef711653cd51"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.101-0500 I STORAGE [ReplWriterWorker-3] createCollection: test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 with provided UUID: 0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0 and options: { uuid: UUID("0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.111-0500 I INDEX [conn46] index build: starting on test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.111-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.111-0500 I STORAGE [conn46] Index build initialized: 6048edae-9d30-4ac9-9e0d-476ab2591d6c: test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.111-0500 I INDEX [conn46] Waiting for index build to complete: 6048edae-9d30-4ac9-9e0d-476ab2591d6c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.111-0500 I COMMAND [conn110] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.111-0500 I STORAGE [conn110] dropCollection: test1_fsmdb0.agg_out (4d8de200-c658-4c08-af3b-b8306fa1c260) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 574), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.111-0500 I STORAGE [conn110] Finishing collection drop for test1_fsmdb0.agg_out (4d8de200-c658-4c08-af3b-b8306fa1c260).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.111-0500 I STORAGE [conn110] renameCollection: renaming collection 976a0b0c-38a0-46aa-945b-11d8b16efca0 from test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.111-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (4d8de200-c658-4c08-af3b-b8306fa1c260)'. Ident: 'index-202-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 574)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.111-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (4d8de200-c658-4c08-af3b-b8306fa1c260)'. Ident: 'index-208-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 574)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.111-0500 I STORAGE [conn110] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-199-8224331490264904478, commit timestamp: Timestamp(1574796680, 574)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.111-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.111-0500 I INDEX [conn112] Registering index build: 46193baa-6fb7-4ac1-918f-ab9526042943
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.111-0500 I INDEX [conn114] Registering index build: ec04e647-8de2-43dc-8afd-a73cf48316a1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.112-0500 I COMMAND [conn67] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1351217302646477215, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1041960495672640132, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796676747), clusterTime: Timestamp(1574796676, 6067) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 6067), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3363ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:20.112-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d") }, $clusterTime: { clusterTime: Timestamp(1574796676, 6067), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3364ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.112-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.114-0500 I STORAGE [conn110] createCollection: test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c with generated UUID: ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.115-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.118-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.119-0500 I STORAGE [ReplWriterWorker-15] createCollection: test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 with provided UUID: 317e4cb4-8991-4401-bad1-1943f5aa7f79 and options: { uuid: UUID("317e4cb4-8991-4401-bad1-1943f5aa7f79"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.119-0500 I STORAGE [ReplWriterWorker-10] createCollection: test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 with provided UUID: 35dbc508-59b9-48c4-b237-ef711653cd51 and options: { uuid: UUID("35dbc508-59b9-48c4-b237-ef711653cd51"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.122-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.133-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.134-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.135-0500 I STORAGE [ReplWriterWorker-1] createCollection: test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 with provided UUID: 317e4cb4-8991-4401-bad1-1943f5aa7f79 and options: { uuid: UUID("317e4cb4-8991-4401-bad1-1943f5aa7f79"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.137-0500 I INDEX [conn112] index build: starting on test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.137-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.137-0500 I STORAGE [conn112] Index build initialized: 46193baa-6fb7-4ac1-918f-ab9526042943: test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 (35dbc508-59b9-48c4-b237-ef711653cd51 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.137-0500 I INDEX [conn112] Waiting for index build to complete: 46193baa-6fb7-4ac1-918f-ab9526042943
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.138-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 6048edae-9d30-4ac9-9e0d-476ab2591d6c: test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 ( 0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.138-0500 I INDEX [conn46] Index build completed: 6048edae-9d30-4ac9-9e0d-476ab2591d6c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.146-0500 I INDEX [conn110] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.146-0500 I COMMAND [conn108] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.146-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.agg_out (976a0b0c-38a0-46aa-945b-11d8b16efca0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 1015), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.146-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.agg_out (976a0b0c-38a0-46aa-945b-11d8b16efca0).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.147-0500 I STORAGE [conn108] renameCollection: renaming collection 41bb520a-fcd0-4263-91f0-dcf65f03e1da from test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.147-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (976a0b0c-38a0-46aa-945b-11d8b16efca0)'. Ident: 'index-207-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 1015)'
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:20.147-0500 I COMMAND [conn33] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063") }, $clusterTime: { clusterTime: Timestamp(1574796676, 6573), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3346ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.149-0500 I INDEX [ReplWriterWorker-9] index build: starting on test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.151-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:20.202-0500 I COMMAND [conn74] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856") }, $clusterTime: { clusterTime: Timestamp(1574796676, 7082), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 148ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.147-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (976a0b0c-38a0-46aa-945b-11d8b16efca0)'. Ident: 'index-210-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 1015)'
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:20.279-0500 I COMMAND [conn32] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee") }, $clusterTime: { clusterTime: Timestamp(1574796680, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 224ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.149-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.166-0500 I INDEX [ReplWriterWorker-4] index build: starting on test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.147-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-205-8224331490264904478, commit timestamp: Timestamp(1574796680, 1015)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:20.244-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563") }, $clusterTime: { clusterTime: Timestamp(1574796680, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 188ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:20.372-0500 I COMMAND [conn33] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063") }, $clusterTime: { clusterTime: Timestamp(1574796680, 1015), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:811 protocol:op_msg 223ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.149-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: fb17445e-8202-4161-9397-69888db05d67: test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 (41bb520a-fcd0-4263-91f0-dcf65f03e1da ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.166-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.147-0500 I INDEX [conn110] Registering index build: 5417d84e-f2ee-44b0-a405-e9d9e121982b
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:20.314-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d") }, $clusterTime: { clusterTime: Timestamp(1574796680, 638), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 201ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:20.446-0500 I COMMAND [conn32] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee") }, $clusterTime: { clusterTime: Timestamp(1574796680, 2661), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:814 protocol:op_msg 166ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.150-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.166-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 44c7167e-84b3-40ce-9857-bafc089714e9: test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 (41bb520a-fcd0-4263-91f0-dcf65f03e1da ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.147-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:20.373-0500 I COMMAND [conn74] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856") }, $clusterTime: { clusterTime: Timestamp(1574796680, 1523), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:811 protocol:op_msg 168ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.150-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.166-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.147-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5558240341605810119, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5221005466526197573, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796676800), clusterTime: Timestamp(1574796676, 6573) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796676, 6573), signature: { hash: BinData(0, C67B551C1C4F64DBC6B77DC783768280A39A1DDC), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3345ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:20.425-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563") }, $clusterTime: { clusterTime: Timestamp(1574796680, 2094), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:814 protocol:op_msg 179ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.156-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: drain applied 64 side writes (inserted: 64, deleted: 0) for '_id_hashed' in 2 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.167-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.147-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:20.447-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d") }, $clusterTime: { clusterTime: Timestamp(1574796680, 3234), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:816 protocol:op_msg 115ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.156-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.171-0500 I COMMAND [conn56] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796680, 8) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("c65a601f-c957-428e-adeb-3bd85740d639") }, $clusterTime: { clusterTime: Timestamp(1574796680, 72), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 100ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.150-0500 I STORAGE [conn46] createCollection: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 with generated UUID: 574f635c-9624-41ce-b51e-5cd717176488 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.156-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 (976a0b0c-38a0-46aa-945b-11d8b16efca0) to test1_fsmdb0.agg_out and drop 4d8de200-c658-4c08-af3b-b8306fa1c260.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.172-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 64 side writes (inserted: 64, deleted: 0) for '_id_hashed' in 2 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.157-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.156-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test1_fsmdb0.agg_out (4d8de200-c658-4c08-af3b-b8306fa1c260) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 574), t: 1 } and commit timestamp Timestamp(1574796680, 574)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.172-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.174-0500 I INDEX [conn114] index build: starting on test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.157-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test1_fsmdb0.agg_out (4d8de200-c658-4c08-af3b-b8306fa1c260).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.172-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 (976a0b0c-38a0-46aa-945b-11d8b16efca0) to test1_fsmdb0.agg_out and drop 4d8de200-c658-4c08-af3b-b8306fa1c260.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.174-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.157-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 976a0b0c-38a0-46aa-945b-11d8b16efca0 from test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.172-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test1_fsmdb0.agg_out (4d8de200-c658-4c08-af3b-b8306fa1c260) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 574), t: 1 } and commit timestamp Timestamp(1574796680, 574)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.174-0500 I STORAGE [conn114] Index build initialized: ec04e647-8de2-43dc-8afd-a73cf48316a1: test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 (317e4cb4-8991-4401-bad1-1943f5aa7f79 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.157-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (4d8de200-c658-4c08-af3b-b8306fa1c260)'. Ident: 'index-208--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 574)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.172-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test1_fsmdb0.agg_out (4d8de200-c658-4c08-af3b-b8306fa1c260).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.174-0500 I INDEX [conn114] Waiting for index build to complete: ec04e647-8de2-43dc-8afd-a73cf48316a1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.157-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (4d8de200-c658-4c08-af3b-b8306fa1c260)'. Ident: 'index-219--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 574)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.172-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 976a0b0c-38a0-46aa-945b-11d8b16efca0 from test1_fsmdb0.tmp.agg_out.3baa5bf6-f436-4eeb-9aa9-0c8fab047981 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.174-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.157-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-207--8000595249233899911, commit timestamp: Timestamp(1574796680, 574)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.172-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (4d8de200-c658-4c08-af3b-b8306fa1c260)'. Ident: 'index-208--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 574)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.176-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 46193baa-6fb7-4ac1-918f-ab9526042943: test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 ( 35dbc508-59b9-48c4-b237-ef711653cd51 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.158-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: fb17445e-8202-4161-9397-69888db05d67: test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 ( 41bb520a-fcd0-4263-91f0-dcf65f03e1da ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.173-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (4d8de200-c658-4c08-af3b-b8306fa1c260)'. Ident: 'index-219--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 574)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.184-0500 I INDEX [conn46] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.160-0500 I STORAGE [ReplWriterWorker-6] createCollection: test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c with provided UUID: ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b and options: { uuid: UUID("ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.173-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-207--4104909142373009110, commit timestamp: Timestamp(1574796680, 574)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.185-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.175-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.176-0500 I STORAGE [ReplWriterWorker-13] createCollection: test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c with provided UUID: ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b and options: { uuid: UUID("ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.194-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.192-0500 I INDEX [ReplWriterWorker-6] index build: starting on test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.176-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 44c7167e-84b3-40ce-9857-bafc089714e9: test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 ( 41bb520a-fcd0-4263-91f0-dcf65f03e1da ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.201-0500 I INDEX [conn110] index build: starting on test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.192-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.192-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.201-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.192-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 8e256131-8929-4a60-8515-ad220f72e373: test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.211-0500 I INDEX [ReplWriterWorker-14] index build: starting on test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.201-0500 I STORAGE [conn110] Index build initialized: 5417d84e-f2ee-44b0-a405-e9d9e121982b: test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.192-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.211-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.202-0500 I INDEX [conn110] Waiting for index build to complete: 5417d84e-f2ee-44b0-a405-e9d9e121982b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.193-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.211-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 6d11e354-02e6-40f3-90c0-cd012abecc1d: test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.202-0500 I INDEX [conn112] Index build completed: 46193baa-6fb7-4ac1-918f-ab9526042943
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.194-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 (41bb520a-fcd0-4263-91f0-dcf65f03e1da) to test1_fsmdb0.agg_out and drop 976a0b0c-38a0-46aa-945b-11d8b16efca0.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.211-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.202-0500 I COMMAND [conn108] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.195-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.212-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.202-0500 I COMMAND [conn112] command test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 572), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 20324 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 110ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.195-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test1_fsmdb0.agg_out (976a0b0c-38a0-46aa-945b-11d8b16efca0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 1015), t: 1 } and commit timestamp Timestamp(1574796680, 1015)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.213-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 (41bb520a-fcd0-4263-91f0-dcf65f03e1da) to test1_fsmdb0.agg_out and drop 976a0b0c-38a0-46aa-945b-11d8b16efca0.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.202-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.agg_out (41bb520a-fcd0-4263-91f0-dcf65f03e1da) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 1523), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.195-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test1_fsmdb0.agg_out (976a0b0c-38a0-46aa-945b-11d8b16efca0).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.215-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.202-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.agg_out (41bb520a-fcd0-4263-91f0-dcf65f03e1da).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.195-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 41bb520a-fcd0-4263-91f0-dcf65f03e1da from test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.216-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test1_fsmdb0.agg_out (976a0b0c-38a0-46aa-945b-11d8b16efca0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 1015), t: 1 } and commit timestamp Timestamp(1574796680, 1015)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.202-0500 I STORAGE [conn108] renameCollection: renaming collection 0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0 from test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.195-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (976a0b0c-38a0-46aa-945b-11d8b16efca0)'. Ident: 'index-212--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 1015)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.216-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test1_fsmdb0.agg_out (976a0b0c-38a0-46aa-945b-11d8b16efca0).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.202-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (41bb520a-fcd0-4263-91f0-dcf65f03e1da)'. Ident: 'index-213-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 1523)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.195-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (976a0b0c-38a0-46aa-945b-11d8b16efca0)'. Ident: 'index-221--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 1015)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.216-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 41bb520a-fcd0-4263-91f0-dcf65f03e1da from test1_fsmdb0.tmp.agg_out.0d4daf37-dc2a-46a4-aaeb-73d8ba2ce365 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.202-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (41bb520a-fcd0-4263-91f0-dcf65f03e1da)'. Ident: 'index-214-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 1523)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.195-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-211--8000595249233899911, commit timestamp: Timestamp(1574796680, 1015)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.216-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (976a0b0c-38a0-46aa-945b-11d8b16efca0)'. Ident: 'index-212--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.202-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-211-8224331490264904478, commit timestamp: Timestamp(1574796680, 1523)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.196-0500 I STORAGE [ReplWriterWorker-15] createCollection: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 with provided UUID: 574f635c-9624-41ce-b51e-5cd717176488 and options: { uuid: UUID("574f635c-9624-41ce-b51e-5cd717176488"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.216-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (976a0b0c-38a0-46aa-945b-11d8b16efca0)'. Ident: 'index-221--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.202-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.197-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 8e256131-8929-4a60-8515-ad220f72e373: test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 ( 0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.216-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-211--4104909142373009110, commit timestamp: Timestamp(1574796680, 1015)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.202-0500 I INDEX [conn46] Registering index build: a401e23b-8ed5-4814-8567-47dd374166f7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.212-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.216-0500 I STORAGE [ReplWriterWorker-5] createCollection: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 with provided UUID: 574f635c-9624-41ce-b51e-5cd717176488 and options: { uuid: UUID("574f635c-9624-41ce-b51e-5cd717176488"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.202-0500 I COMMAND [conn65] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8053038227914928287, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1534845888977876219, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796680054), clusterTime: Timestamp(1574796680, 3) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 3), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 147ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.231-0500 I INDEX [ReplWriterWorker-10] index build: starting on test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.217-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 6d11e354-02e6-40f3-90c0-cd012abecc1d: test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 ( 0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.202-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: ec04e647-8de2-43dc-8afd-a73cf48316a1: test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 ( 317e4cb4-8991-4401-bad1-1943f5aa7f79 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.231-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.232-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.203-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.231-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: bdc213d1-d04f-4478-8914-b95e5f86000f: test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 (35dbc508-59b9-48c4-b237-ef711653cd51 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.249-0500 I INDEX [ReplWriterWorker-13] index build: starting on test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.206-0500 I STORAGE [conn108] createCollection: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a with generated UUID: 54466e7c-e1e2-4596-9a06-739c0d59646c and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.231-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.249-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.213-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.232-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.249-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 793f1527-b6d0-49cf-b4b7-9ae4782155cb: test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 (35dbc508-59b9-48c4-b237-ef711653cd51 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.229-0500 I INDEX [conn46] index build: starting on test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.235-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.249-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.229-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.238-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: bdc213d1-d04f-4478-8914-b95e5f86000f: test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 ( 35dbc508-59b9-48c4-b237-ef711653cd51 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.250-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.229-0500 I STORAGE [conn46] Index build initialized: a401e23b-8ed5-4814-8567-47dd374166f7: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 (574f635c-9624-41ce-b51e-5cd717176488 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.254-0500 I INDEX [ReplWriterWorker-13] index build: starting on test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.252-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.229-0500 I INDEX [conn46] Waiting for index build to complete: a401e23b-8ed5-4814-8567-47dd374166f7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.254-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.257-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 793f1527-b6d0-49cf-b4b7-9ae4782155cb: test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 ( 35dbc508-59b9-48c4-b237-ef711653cd51 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.230-0500 I INDEX [conn114] Index build completed: ec04e647-8de2-43dc-8afd-a73cf48316a1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.254-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 5f0b2cd7-95c6-4231-abd0-8eca56a2cbb3: test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 (317e4cb4-8991-4401-bad1-1943f5aa7f79 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.271-0500 I INDEX [ReplWriterWorker-4] index build: starting on test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.230-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.254-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.271-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.230-0500 I COMMAND [conn114] command test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 572), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 23305 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 132ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.255-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.271-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 1ccac71c-379d-49d1-b711-bd00d9c97ebc: test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 (317e4cb4-8991-4401-bad1-1943f5aa7f79 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.231-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 5417d84e-f2ee-44b0-a405-e9d9e121982b: test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c ( ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.256-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0) to test1_fsmdb0.agg_out and drop 41bb520a-fcd0-4263-91f0-dcf65f03e1da.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.271-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.231-0500 I INDEX [conn110] Index build completed: 5417d84e-f2ee-44b0-a405-e9d9e121982b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.257-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.272-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.240-0500 I INDEX [conn108] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.258-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test1_fsmdb0.agg_out (41bb520a-fcd0-4263-91f0-dcf65f03e1da) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 1523), t: 1 } and commit timestamp Timestamp(1574796680, 1523)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.273-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0) to test1_fsmdb0.agg_out and drop 41bb520a-fcd0-4263-91f0-dcf65f03e1da.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.241-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.258-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test1_fsmdb0.agg_out (41bb520a-fcd0-4263-91f0-dcf65f03e1da).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.274-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.243-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.258-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0 from test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.274-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test1_fsmdb0.agg_out (41bb520a-fcd0-4263-91f0-dcf65f03e1da) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 1523), t: 1 } and commit timestamp Timestamp(1574796680, 1523)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.243-0500 I COMMAND [conn112] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.258-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (41bb520a-fcd0-4263-91f0-dcf65f03e1da)'. Ident: 'index-216--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 1523)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.274-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test1_fsmdb0.agg_out (41bb520a-fcd0-4263-91f0-dcf65f03e1da).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.243-0500 I STORAGE [conn112] dropCollection: test1_fsmdb0.agg_out (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 2030), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.258-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (41bb520a-fcd0-4263-91f0-dcf65f03e1da)'. Ident: 'index-229--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 1523)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.274-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0 from test1_fsmdb0.tmp.agg_out.e7d6945f-dd9f-4218-b956-4da892fe54f1 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.243-0500 I STORAGE [conn112] Finishing collection drop for test1_fsmdb0.agg_out (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.258-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-215--8000595249233899911, commit timestamp: Timestamp(1574796680, 1523)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.274-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (41bb520a-fcd0-4263-91f0-dcf65f03e1da)'. Ident: 'index-216--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 1523)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.243-0500 I STORAGE [conn112] renameCollection: renaming collection 35dbc508-59b9-48c4-b237-ef711653cd51 from test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.258-0500 I STORAGE [ReplWriterWorker-0] createCollection: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a with provided UUID: 54466e7c-e1e2-4596-9a06-739c0d59646c and options: { uuid: UUID("54466e7c-e1e2-4596-9a06-739c0d59646c"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.274-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (41bb520a-fcd0-4263-91f0-dcf65f03e1da)'. Ident: 'index-229--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 1523)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.243-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0)'. Ident: 'index-219-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 2030)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.260-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 5f0b2cd7-95c6-4231-abd0-8eca56a2cbb3: test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 ( 317e4cb4-8991-4401-bad1-1943f5aa7f79 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.274-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-215--4104909142373009110, commit timestamp: Timestamp(1574796680, 1523)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.243-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0)'. Ident: 'index-222-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 2030)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.275-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.276-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 1ccac71c-379d-49d1-b711-bd00d9c97ebc: test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 ( 317e4cb4-8991-4401-bad1-1943f5aa7f79 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.243-0500 I STORAGE [conn112] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-216-8224331490264904478, commit timestamp: Timestamp(1574796680, 2030)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.294-0500 I INDEX [ReplWriterWorker-12] index build: starting on test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.276-0500 I STORAGE [ReplWriterWorker-11] createCollection: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a with provided UUID: 54466e7c-e1e2-4596-9a06-739c0d59646c and options: { uuid: UUID("54466e7c-e1e2-4596-9a06-739c0d59646c"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.244-0500 I INDEX [conn108] Registering index build: 3b296914-33fb-4630-9205-bafd989c0968
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.294-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.291-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.244-0500 I COMMAND [conn68] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7922788805010597143, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3422205033611069300, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796680055), clusterTime: Timestamp(1574796680, 3) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 4), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796675, 2), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 187ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.294-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: d036a164-6c68-4bfe-8500-bf912aa72524: test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.311-0500 I INDEX [ReplWriterWorker-2] index build: starting on test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.245-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: a401e23b-8ed5-4814-8567-47dd374166f7: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 ( 574f635c-9624-41ce-b51e-5cd717176488 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.294-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.311-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.246-0500 I STORAGE [conn110] createCollection: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e with generated UUID: aaf40286-a964-45b0-a25e-57f46facccc6 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.295-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.311-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 64a981b2-864b-455d-96be-b96d6080d9e5: test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.270-0500 I INDEX [conn108] index build: starting on test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.297-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.311-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.270-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.301-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d036a164-6c68-4bfe-8500-bf912aa72524: test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c ( ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.312-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.270-0500 I STORAGE [conn108] Index build initialized: 3b296914-33fb-4630-9205-bafd989c0968: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a (54466e7c-e1e2-4596-9a06-739c0d59646c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.318-0500 I INDEX [ReplWriterWorker-14] index build: starting on test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.314-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.270-0500 I INDEX [conn108] Waiting for index build to complete: 3b296914-33fb-4630-9205-bafd989c0968
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.318-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.317-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 64a981b2-864b-455d-96be-b96d6080d9e5: test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c ( ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.270-0500 I INDEX [conn46] Index build completed: a401e23b-8ed5-4814-8567-47dd374166f7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.318-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 524a2870-7cb8-4c3a-a1d9-145b32086b7e: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 (574f635c-9624-41ce-b51e-5cd717176488 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.334-0500 I INDEX [ReplWriterWorker-9] index build: starting on test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.277-0500 I INDEX [conn110] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.318-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.334-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.277-0500 I COMMAND [conn114] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.318-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.334-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: cbda52b0-1271-4c6e-b413-dd5a3a68f403: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 (574f635c-9624-41ce-b51e-5cd717176488 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.277-0500 I STORAGE [conn114] dropCollection: test1_fsmdb0.agg_out (35dbc508-59b9-48c4-b237-ef711653cd51) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 2597), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.319-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 (35dbc508-59b9-48c4-b237-ef711653cd51) to test1_fsmdb0.agg_out and drop 0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.334-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.277-0500 I STORAGE [conn114] Finishing collection drop for test1_fsmdb0.agg_out (35dbc508-59b9-48c4-b237-ef711653cd51).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.321-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.335-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.278-0500 I STORAGE [conn114] renameCollection: renaming collection 317e4cb4-8991-4401-bad1-1943f5aa7f79 from test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.321-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test1_fsmdb0.agg_out (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 2030), t: 1 } and commit timestamp Timestamp(1574796680, 2030)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.336-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 (35dbc508-59b9-48c4-b237-ef711653cd51) to test1_fsmdb0.agg_out and drop 0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.278-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (35dbc508-59b9-48c4-b237-ef711653cd51)'. Ident: 'index-220-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 2597)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.321-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test1_fsmdb0.agg_out (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.339-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.278-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (35dbc508-59b9-48c4-b237-ef711653cd51)'. Ident: 'index-224-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 2597)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.321-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection 35dbc508-59b9-48c4-b237-ef711653cd51 from test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.339-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test1_fsmdb0.agg_out (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 2030), t: 1 } and commit timestamp Timestamp(1574796680, 2030)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.278-0500 I STORAGE [conn114] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-217-8224331490264904478, commit timestamp: Timestamp(1574796680, 2597)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.321-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0)'. Ident: 'index-224--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 2030)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.339-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test1_fsmdb0.agg_out (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.278-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.321-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0)'. Ident: 'index-233--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 2030)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.339-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 35dbc508-59b9-48c4-b237-ef711653cd51 from test1_fsmdb0.tmp.agg_out.787fd849-7a62-4092-a88a-c1063ce6f747 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.279-0500 I INDEX [conn110] Registering index build: d9ffdbbd-e74c-49e1-b75e-84b27849bf48
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.321-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-223--8000595249233899911, commit timestamp: Timestamp(1574796680, 2030)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.339-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0)'. Ident: 'index-224--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 2030)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.279-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4159949042747058837, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4341525185821773880, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796680055), clusterTime: Timestamp(1574796680, 3) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 4), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 223ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.322-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 524a2870-7cb8-4c3a-a1d9-145b32086b7e: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 ( 574f635c-9624-41ce-b51e-5cd717176488 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.339-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (0b341e36-0cc2-4c89-93b2-8ebe66bbe4a0)'. Ident: 'index-233--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 2030)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.279-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.325-0500 I STORAGE [ReplWriterWorker-11] createCollection: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e with provided UUID: aaf40286-a964-45b0-a25e-57f46facccc6 and options: { uuid: UUID("aaf40286-a964-45b0-a25e-57f46facccc6"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.339-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-223--4104909142373009110, commit timestamp: Timestamp(1574796680, 2030)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.282-0500 I STORAGE [conn114] createCollection: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 with generated UUID: 418a4e9e-bf75-4acf-8b07-b534529cf66d and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.340-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.341-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: cbda52b0-1271-4c6e-b413-dd5a3a68f403: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 ( 574f635c-9624-41ce-b51e-5cd717176488 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.282-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.356-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 (317e4cb4-8991-4401-bad1-1943f5aa7f79) to test1_fsmdb0.agg_out and drop 35dbc508-59b9-48c4-b237-ef711653cd51.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.343-0500 I STORAGE [ReplWriterWorker-8] createCollection: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e with provided UUID: aaf40286-a964-45b0-a25e-57f46facccc6 and options: { uuid: UUID("aaf40286-a964-45b0-a25e-57f46facccc6"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.297-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 3b296914-33fb-4630-9205-bafd989c0968: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a ( 54466e7c-e1e2-4596-9a06-739c0d59646c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.356-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test1_fsmdb0.agg_out (35dbc508-59b9-48c4-b237-ef711653cd51) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 2597), t: 1 } and commit timestamp Timestamp(1574796680, 2597)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.356-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.307-0500 I INDEX [conn110] index build: starting on test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.356-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test1_fsmdb0.agg_out (35dbc508-59b9-48c4-b237-ef711653cd51).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.364-0500 I COMMAND [conn56] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796680, 2223) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("c65a601f-c957-428e-adeb-3bd85740d639") }, $clusterTime: { clusterTime: Timestamp(1574796680, 2287), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 12587 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 104ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.307-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.356-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 317e4cb4-8991-4401-bad1-1943f5aa7f79 from test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.364-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 (317e4cb4-8991-4401-bad1-1943f5aa7f79) to test1_fsmdb0.agg_out and drop 35dbc508-59b9-48c4-b237-ef711653cd51.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.307-0500 I STORAGE [conn110] Index build initialized: d9ffdbbd-e74c-49e1-b75e-84b27849bf48: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e (aaf40286-a964-45b0-a25e-57f46facccc6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.356-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (35dbc508-59b9-48c4-b237-ef711653cd51)'. Ident: 'index-226--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 2597)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.364-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test1_fsmdb0.agg_out (35dbc508-59b9-48c4-b237-ef711653cd51) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 2597), t: 1 } and commit timestamp Timestamp(1574796680, 2597)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.307-0500 I INDEX [conn110] Waiting for index build to complete: d9ffdbbd-e74c-49e1-b75e-84b27849bf48
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.356-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (35dbc508-59b9-48c4-b237-ef711653cd51)'. Ident: 'index-237--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 2597)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.364-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test1_fsmdb0.agg_out (35dbc508-59b9-48c4-b237-ef711653cd51).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.307-0500 I INDEX [conn108] Index build completed: 3b296914-33fb-4630-9205-bafd989c0968
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.356-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-225--8000595249233899911, commit timestamp: Timestamp(1574796680, 2597)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.364-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 317e4cb4-8991-4401-bad1-1943f5aa7f79 from test1_fsmdb0.tmp.agg_out.994b8a0c-a39e-4549-90b1-caf6acf248c1 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.313-0500 I INDEX [conn114] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.359-0500 I STORAGE [ReplWriterWorker-14] createCollection: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 with provided UUID: 418a4e9e-bf75-4acf-8b07-b534529cf66d and options: { uuid: UUID("418a4e9e-bf75-4acf-8b07-b534529cf66d"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.364-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (35dbc508-59b9-48c4-b237-ef711653cd51)'. Ident: 'index-226--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 2597)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.313-0500 I COMMAND [conn112] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.373-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.364-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (35dbc508-59b9-48c4-b237-ef711653cd51)'. Ident: 'index-237--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 2597)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.313-0500 I STORAGE [conn112] dropCollection: test1_fsmdb0.agg_out (317e4cb4-8991-4401-bad1-1943f5aa7f79) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 3102), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.387-0500 I INDEX [ReplWriterWorker-4] index build: starting on test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.364-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-225--4104909142373009110, commit timestamp: Timestamp(1574796680, 2597)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.313-0500 I STORAGE [conn112] Finishing collection drop for test1_fsmdb0.agg_out (317e4cb4-8991-4401-bad1-1943f5aa7f79).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.387-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.374-0500 I STORAGE [ReplWriterWorker-7] createCollection: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 with provided UUID: 418a4e9e-bf75-4acf-8b07-b534529cf66d and options: { uuid: UUID("418a4e9e-bf75-4acf-8b07-b534529cf66d"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.313-0500 I STORAGE [conn112] renameCollection: renaming collection ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b from test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.387-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 2e00185a-1d28-4362-9b51-a796a814f5e4: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a (54466e7c-e1e2-4596-9a06-739c0d59646c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.389-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.314-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (317e4cb4-8991-4401-bad1-1943f5aa7f79)'. Ident: 'index-221-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 3102)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.388-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.404-0500 I INDEX [ReplWriterWorker-12] index build: starting on test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.314-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (317e4cb4-8991-4401-bad1-1943f5aa7f79)'. Ident: 'index-228-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 3102)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.388-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.404-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.314-0500 I STORAGE [conn112] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-218-8224331490264904478, commit timestamp: Timestamp(1574796680, 3102)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.390-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.404-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: e8d5a595-3098-420e-8c38-caa5e201877b: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a (54466e7c-e1e2-4596-9a06-739c0d59646c ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.314-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.394-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 2e00185a-1d28-4362-9b51-a796a814f5e4: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a ( 54466e7c-e1e2-4596-9a06-739c0d59646c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.404-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.314-0500 I INDEX [conn114] Registering index build: 7994da6b-dfe5-471c-84b4-9c49becb86b7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.400-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b) to test1_fsmdb0.agg_out and drop 317e4cb4-8991-4401-bad1-1943f5aa7f79.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:23.216-0500 I COMMAND [conn33] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063") }, $clusterTime: { clusterTime: Timestamp(1574796680, 4559), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2842ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.405-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.314-0500 I COMMAND [conn67] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6275391063324782329, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6081318759008008280, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796680113), clusterTime: Timestamp(1574796680, 638) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 702), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 200ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.400-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test1_fsmdb0.agg_out (317e4cb4-8991-4401-bad1-1943f5aa7f79) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 3102), t: 1 } and commit timestamp Timestamp(1574796680, 3102)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.407-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.314-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.400-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test1_fsmdb0.agg_out (317e4cb4-8991-4401-bad1-1943f5aa7f79).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.409-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b) to test1_fsmdb0.agg_out and drop 317e4cb4-8991-4401-bad1-1943f5aa7f79.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.324-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.400-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b from test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.409-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test1_fsmdb0.agg_out (317e4cb4-8991-4401-bad1-1943f5aa7f79) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 3102), t: 1 } and commit timestamp Timestamp(1574796680, 3102)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.331-0500 I INDEX [conn114] index build: starting on test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.400-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (317e4cb4-8991-4401-bad1-1943f5aa7f79)'. Ident: 'index-228--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 3102)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.409-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test1_fsmdb0.agg_out (317e4cb4-8991-4401-bad1-1943f5aa7f79).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.331-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.400-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (317e4cb4-8991-4401-bad1-1943f5aa7f79)'. Ident: 'index-239--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 3102)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.409-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b from test1_fsmdb0.tmp.agg_out.6e969e14-b1dc-4939-8495-d2ff27240a6c to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.331-0500 I STORAGE [conn114] Index build initialized: 7994da6b-dfe5-471c-84b4-9c49becb86b7: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 (418a4e9e-bf75-4acf-8b07-b534529cf66d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.400-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-227--8000595249233899911, commit timestamp: Timestamp(1574796680, 3102)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.410-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (317e4cb4-8991-4401-bad1-1943f5aa7f79)'. Ident: 'index-228--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 3102)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.331-0500 I INDEX [conn114] Waiting for index build to complete: 7994da6b-dfe5-471c-84b4-9c49becb86b7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.419-0500 I INDEX [ReplWriterWorker-15] index build: starting on test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.410-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (317e4cb4-8991-4401-bad1-1943f5aa7f79)'. Ident: 'index-239--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 3102)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.331-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.419-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.410-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-227--4104909142373009110, commit timestamp: Timestamp(1574796680, 3102)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.332-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d9ffdbbd-e74c-49e1-b75e-84b27849bf48: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e ( aaf40286-a964-45b0-a25e-57f46facccc6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.419-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 23f1668d-6d76-4195-a206-2a914056a035: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e (aaf40286-a964-45b0-a25e-57f46facccc6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.410-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e8d5a595-3098-420e-8c38-caa5e201877b: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a ( 54466e7c-e1e2-4596-9a06-739c0d59646c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.332-0500 I INDEX [conn110] Index build completed: d9ffdbbd-e74c-49e1-b75e-84b27849bf48
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.419-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.435-0500 I INDEX [ReplWriterWorker-6] index build: starting on test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.333-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.420-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.435-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.334-0500 I STORAGE [conn112] createCollection: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc with generated UUID: 0b7abe64-14bd-4e3a-8b07-67866ccb2774 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.422-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.435-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 7ce49cb9-2778-4f47-a09f-5177a5e28017: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e (aaf40286-a964-45b0-a25e-57f46facccc6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.336-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.425-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 23f1668d-6d76-4195-a206-2a914056a035: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e ( aaf40286-a964-45b0-a25e-57f46facccc6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.435-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.346-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 7994da6b-dfe5-471c-84b4-9c49becb86b7: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 ( 418a4e9e-bf75-4acf-8b07-b534529cf66d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.427-0500 I STORAGE [ReplWriterWorker-14] createCollection: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc with provided UUID: 0b7abe64-14bd-4e3a-8b07-67866ccb2774 and options: { uuid: UUID("0b7abe64-14bd-4e3a-8b07-67866ccb2774"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.436-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.346-0500 I INDEX [conn114] Index build completed: 7994da6b-dfe5-471c-84b4-9c49becb86b7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.443-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.438-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.354-0500 I INDEX [conn112] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.462-0500 I INDEX [ReplWriterWorker-13] index build: starting on test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.442-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 7ce49cb9-2778-4f47-a09f-5177a5e28017: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e ( aaf40286-a964-45b0-a25e-57f46facccc6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.355-0500 I INDEX [conn112] Registering index build: c1289b6c-e7af-4d54-9d93-6156cb10aeec
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.462-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.444-0500 I STORAGE [ReplWriterWorker-14] createCollection: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc with provided UUID: 0b7abe64-14bd-4e3a-8b07-67866ccb2774 and options: { uuid: UUID("0b7abe64-14bd-4e3a-8b07-67866ccb2774"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.355-0500 I COMMAND [conn46] CMD: drop test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.462-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 8f60be56-d202-44d7-84b5-2ef9c055d614: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 (418a4e9e-bf75-4acf-8b07-b534529cf66d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.460-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.371-0500 I INDEX [conn112] index build: starting on test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.462-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.478-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.371-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.462-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.478-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.371-0500 I STORAGE [conn112] Index build initialized: c1289b6c-e7af-4d54-9d93-6156cb10aeec: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc (0b7abe64-14bd-4e3a-8b07-67866ccb2774 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.467-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: drain applied 64 side writes (inserted: 64, deleted: 0) for '_id_hashed' in 1 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.478-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: a5fcef5d-5e19-4990-b016-2586ebdb5145: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 (418a4e9e-bf75-4acf-8b07-b534529cf66d ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.371-0500 I INDEX [conn112] Waiting for index build to complete: c1289b6c-e7af-4d54-9d93-6156cb10aeec
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.467-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.478-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.371-0500 I STORAGE [conn46] dropCollection: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 (574f635c-9624-41ce-b51e-5cd717176488) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.467-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 8f60be56-d202-44d7-84b5-2ef9c055d614: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 ( 418a4e9e-bf75-4acf-8b07-b534529cf66d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.478-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.371-0500 I STORAGE [conn46] Finishing collection drop for test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 (574f635c-9624-41ce-b51e-5cd717176488).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.468-0500 I COMMAND [ReplWriterWorker-12] CMD: drop test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.480-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.371-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 (574f635c-9624-41ce-b51e-5cd717176488)'. Ident: 'index-231-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 4431)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.468-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 (574f635c-9624-41ce-b51e-5cd717176488) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 4431), t: 1 } and commit timestamp Timestamp(1574796680, 4431)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.482-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: a5fcef5d-5e19-4990-b016-2586ebdb5145: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 ( 418a4e9e-bf75-4acf-8b07-b534529cf66d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.371-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 (574f635c-9624-41ce-b51e-5cd717176488)'. Ident: 'index-234-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 4431)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.468-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 (574f635c-9624-41ce-b51e-5cd717176488).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.487-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.371-0500 I STORAGE [conn46] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3'. Ident: collection-229-8224331490264904478, commit timestamp: Timestamp(1574796680, 4431)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.468-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 (574f635c-9624-41ce-b51e-5cd717176488)'. Ident: 'index-236--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 4431)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.487-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 (574f635c-9624-41ce-b51e-5cd717176488) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 4431), t: 1 } and commit timestamp Timestamp(1574796680, 4431)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.372-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.468-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 (574f635c-9624-41ce-b51e-5cd717176488)'. Ident: 'index-245--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 4431)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.487-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 (574f635c-9624-41ce-b51e-5cd717176488).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.372-0500 I COMMAND [conn71] command test1_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 867883044721763800, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3315879577622727488, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796680148), clusterTime: Timestamp(1574796680, 1015) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 1015), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:981 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 222ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.468-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3'. Ident: collection-235--8000595249233899911, commit timestamp: Timestamp(1574796680, 4431)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.487-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 (574f635c-9624-41ce-b51e-5cd717176488)'. Ident: 'index-236--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 4431)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.372-0500 I COMMAND [conn108] CMD: drop test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.471-0500 I COMMAND [ReplWriterWorker-10] CMD: drop test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.487-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3 (574f635c-9624-41ce-b51e-5cd717176488)'. Ident: 'index-245--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 4431)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.372-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a (54466e7c-e1e2-4596-9a06-739c0d59646c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.471-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a (54466e7c-e1e2-4596-9a06-739c0d59646c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 4560), t: 1 } and commit timestamp Timestamp(1574796680, 4560)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.487-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3'. Ident: collection-235--4104909142373009110, commit timestamp: Timestamp(1574796680, 4431)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.372-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a (54466e7c-e1e2-4596-9a06-739c0d59646c).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.471-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a (54466e7c-e1e2-4596-9a06-739c0d59646c).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.489-0500 I COMMAND [conn56] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796680, 4431) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("c65a601f-c957-428e-adeb-3bd85740d639") }, $clusterTime: { clusterTime: Timestamp(1574796680, 4559), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 114ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.372-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a (54466e7c-e1e2-4596-9a06-739c0d59646c)'. Ident: 'index-237-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 4560)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.471-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a (54466e7c-e1e2-4596-9a06-739c0d59646c)'. Ident: 'index-242--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 4560)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.490-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.372-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a (54466e7c-e1e2-4596-9a06-739c0d59646c)'. Ident: 'index-238-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 4560)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.471-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a (54466e7c-e1e2-4596-9a06-739c0d59646c)'. Ident: 'index-251--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 4560)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.490-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a (54466e7c-e1e2-4596-9a06-739c0d59646c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 4560), t: 1 } and commit timestamp Timestamp(1574796680, 4560)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.372-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a'. Ident: collection-235-8224331490264904478, commit timestamp: Timestamp(1574796680, 4560)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.471-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a'. Ident: collection-241--8000595249233899911, commit timestamp: Timestamp(1574796680, 4560)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.490-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a (54466e7c-e1e2-4596-9a06-739c0d59646c).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.372-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.487-0500 I INDEX [ReplWriterWorker-11] index build: starting on test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.490-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a (54466e7c-e1e2-4596-9a06-739c0d59646c)'. Ident: 'index-242--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 4560)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.373-0500 I COMMAND [conn65] command test1_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3189447724264204660, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3156308531621130996, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796680204), clusterTime: Timestamp(1574796680, 1523) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 1523), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:981 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 167ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.487-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.490-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a (54466e7c-e1e2-4596-9a06-739c0d59646c)'. Ident: 'index-251--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 4560)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.375-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.487-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 81c0d680-d31f-44ca-bf72-6630d2e512fe: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc (0b7abe64-14bd-4e3a-8b07-67866ccb2774 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.490-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a'. Ident: collection-241--4104909142373009110, commit timestamp: Timestamp(1574796680, 4560)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.376-0500 I STORAGE [conn108] createCollection: test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 with generated UUID: d756c7f4-e40d-4100-b5ee-76e6c74d474a and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.487-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.507-0500 I INDEX [ReplWriterWorker-6] index build: starting on test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.377-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: c1289b6c-e7af-4d54-9d93-6156cb10aeec: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc ( 0b7abe64-14bd-4e3a-8b07-67866ccb2774 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.488-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.507-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.377-0500 I INDEX [conn112] Index build completed: c1289b6c-e7af-4d54-9d93-6156cb10aeec
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.490-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.507-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: f2a8fdfe-020d-4617-b2a1-4f76b1cb01db: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc (0b7abe64-14bd-4e3a-8b07-67866ccb2774 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.393-0500 I INDEX [conn108] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.492-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 81c0d680-d31f-44ca-bf72-6630d2e512fe: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc ( 0b7abe64-14bd-4e3a-8b07-67866ccb2774 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.507-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.394-0500 I STORAGE [conn46] createCollection: test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 with generated UUID: 1d28fb55-bae1-4e1f-a042-d821034bbb65 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.493-0500 I STORAGE [ReplWriterWorker-13] createCollection: test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 with provided UUID: d756c7f4-e40d-4100-b5ee-76e6c74d474a and options: { uuid: UUID("d756c7f4-e40d-4100-b5ee-76e6c74d474a"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.508-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.394-0500 I INDEX [conn108] Registering index build: a44df0d1-5fe3-49e9-9506-6d0dc478b012
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.506-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.511-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.394-0500 I COMMAND [conn110] CMD: drop test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.508-0500 I STORAGE [ReplWriterWorker-9] createCollection: test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 with provided UUID: 1d28fb55-bae1-4e1f-a042-d821034bbb65 and options: { uuid: UUID("1d28fb55-bae1-4e1f-a042-d821034bbb65"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.513-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: f2a8fdfe-020d-4617-b2a1-4f76b1cb01db: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc ( 0b7abe64-14bd-4e3a-8b07-67866ccb2774 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.417-0500 I INDEX [conn46] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.522-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.513-0500 I STORAGE [ReplWriterWorker-13] createCollection: test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 with provided UUID: d756c7f4-e40d-4100-b5ee-76e6c74d474a and options: { uuid: UUID("d756c7f4-e40d-4100-b5ee-76e6c74d474a"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.424-0500 I INDEX [conn108] index build: starting on test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.527-0500 I COMMAND [ReplWriterWorker-9] CMD: drop test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.527-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.424-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.527-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e (aaf40286-a964-45b0-a25e-57f46facccc6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 5119), t: 1 } and commit timestamp Timestamp(1574796680, 5119)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.529-0500 I STORAGE [ReplWriterWorker-5] createCollection: test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 with provided UUID: 1d28fb55-bae1-4e1f-a042-d821034bbb65 and options: { uuid: UUID("1d28fb55-bae1-4e1f-a042-d821034bbb65"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.424-0500 I STORAGE [conn108] Index build initialized: a44df0d1-5fe3-49e9-9506-6d0dc478b012: test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 (d756c7f4-e40d-4100-b5ee-76e6c74d474a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.527-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e (aaf40286-a964-45b0-a25e-57f46facccc6).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.543-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.424-0500 I INDEX [conn108] Waiting for index build to complete: a44df0d1-5fe3-49e9-9506-6d0dc478b012
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.527-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e (aaf40286-a964-45b0-a25e-57f46facccc6)'. Ident: 'index-248--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 5119)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.549-0500 I COMMAND [ReplWriterWorker-8] CMD: drop test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.424-0500 I STORAGE [conn110] dropCollection: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e (aaf40286-a964-45b0-a25e-57f46facccc6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.527-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e (aaf40286-a964-45b0-a25e-57f46facccc6)'. Ident: 'index-253--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 5119)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.549-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e (aaf40286-a964-45b0-a25e-57f46facccc6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 5119), t: 1 } and commit timestamp Timestamp(1574796680, 5119)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.424-0500 I STORAGE [conn110] Finishing collection drop for test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e (aaf40286-a964-45b0-a25e-57f46facccc6).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.527-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e'. Ident: collection-247--8000595249233899911, commit timestamp: Timestamp(1574796680, 5119)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.549-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e (aaf40286-a964-45b0-a25e-57f46facccc6).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.424-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e (aaf40286-a964-45b0-a25e-57f46facccc6)'. Ident: 'index-241-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 5119)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.544-0500 I INDEX [ReplWriterWorker-8] index build: starting on test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.549-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e (aaf40286-a964-45b0-a25e-57f46facccc6)'. Ident: 'index-248--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 5119)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.424-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e (aaf40286-a964-45b0-a25e-57f46facccc6)'. Ident: 'index-242-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 5119)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.544-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.549-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e (aaf40286-a964-45b0-a25e-57f46facccc6)'. Ident: 'index-253--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 5119)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.424-0500 I STORAGE [conn110] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e'. Ident: collection-239-8224331490264904478, commit timestamp: Timestamp(1574796680, 5119)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.544-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: fc44c4e4-2980-4468-997f-7e208a89409e: test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 (d756c7f4-e40d-4100-b5ee-76e6c74d474a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.549-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e'. Ident: collection-247--4104909142373009110, commit timestamp: Timestamp(1574796680, 5119)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.424-0500 I INDEX [conn46] Registering index build: 4d396abe-999f-42e6-a59c-e15bc863c340
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.544-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.562-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.424-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.545-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.562-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.424-0500 I COMMAND [conn68] command test1_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7946791800778915901, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1103381241895931130, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796680245), clusterTime: Timestamp(1574796680, 2094) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 2094), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:984 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 178ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.546-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.546-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 (418a4e9e-bf75-4acf-8b07-b534529cf66d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 5559), t: 1 } and commit timestamp Timestamp(1574796680, 5559)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.425-0500 I COMMAND [conn114] CMD: drop test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.562-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 2276dd14-e4a2-4694-81fc-8031a172cf72: test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 (d756c7f4-e40d-4100-b5ee-76e6c74d474a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.546-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 (418a4e9e-bf75-4acf-8b07-b534529cf66d).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.425-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.562-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.546-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 (418a4e9e-bf75-4acf-8b07-b534529cf66d)'. Ident: 'index-250--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 5559)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.426-0500 I COMMAND [conn68] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.563-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.546-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 (418a4e9e-bf75-4acf-8b07-b534529cf66d)'. Ident: 'index-257--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 5559)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.438-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.564-0500 I COMMAND [ReplWriterWorker-10] CMD: drop test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.546-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8'. Ident: collection-249--8000595249233899911, commit timestamp: Timestamp(1574796680, 5559)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.445-0500 I INDEX [conn46] index build: starting on test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.564-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 (418a4e9e-bf75-4acf-8b07-b534529cf66d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 5559), t: 1 } and commit timestamp Timestamp(1574796680, 5559)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.547-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.445-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.564-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 (418a4e9e-bf75-4acf-8b07-b534529cf66d).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.547-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.445-0500 I STORAGE [conn46] Index build initialized: 4d396abe-999f-42e6-a59c-e15bc863c340: test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 (1d28fb55-bae1-4e1f-a042-d821034bbb65 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.564-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 (418a4e9e-bf75-4acf-8b07-b534529cf66d)'. Ident: 'index-250--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 5559)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.547-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc (0b7abe64-14bd-4e3a-8b07-67866ccb2774) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 5560), t: 1 } and commit timestamp Timestamp(1574796680, 5560)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.445-0500 I INDEX [conn46] Waiting for index build to complete: 4d396abe-999f-42e6-a59c-e15bc863c340
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.564-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 (418a4e9e-bf75-4acf-8b07-b534529cf66d)'. Ident: 'index-257--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 5559)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.547-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc (0b7abe64-14bd-4e3a-8b07-67866ccb2774).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.446-0500 I STORAGE [conn114] dropCollection: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 (418a4e9e-bf75-4acf-8b07-b534529cf66d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.565-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8'. Ident: collection-249--4104909142373009110, commit timestamp: Timestamp(1574796680, 5559)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.547-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc (0b7abe64-14bd-4e3a-8b07-67866ccb2774)'. Ident: 'index-256--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 5560)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.446-0500 I STORAGE [conn114] Finishing collection drop for test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 (418a4e9e-bf75-4acf-8b07-b534529cf66d).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.565-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.547-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc (0b7abe64-14bd-4e3a-8b07-67866ccb2774)'. Ident: 'index-259--8000595249233899911', commit timestamp: 'Timestamp(1574796680, 5560)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.446-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 (418a4e9e-bf75-4acf-8b07-b534529cf66d)'. Ident: 'index-245-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 5559)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.565-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.547-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc'. Ident: collection-255--8000595249233899911, commit timestamp: Timestamp(1574796680, 5560)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.446-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8 (418a4e9e-bf75-4acf-8b07-b534529cf66d)'. Ident: 'index-246-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 5559)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.565-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc (0b7abe64-14bd-4e3a-8b07-67866ccb2774) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796680, 5560), t: 1 } and commit timestamp Timestamp(1574796680, 5560)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:20.549-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: fc44c4e4-2980-4468-997f-7e208a89409e: test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 ( d756c7f4-e40d-4100-b5ee-76e6c74d474a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.446-0500 I STORAGE [conn114] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8'. Ident: collection-243-8224331490264904478, commit timestamp: Timestamp(1574796680, 5559)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.565-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc (0b7abe64-14bd-4e3a-8b07-67866ccb2774).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.446-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.201-0500 I STORAGE [ReplWriterWorker-15] createCollection: test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 with provided UUID: ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f and options: { uuid: UUID("ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.565-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc (0b7abe64-14bd-4e3a-8b07-67866ccb2774)'. Ident: 'index-256--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 5560)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.446-0500 I COMMAND [conn70] command test1_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8486613502860460183, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 23851289189262899, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796680280), clusterTime: Timestamp(1574796680, 2661) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 2661), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:984 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 164ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.215-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.565-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc (0b7abe64-14bd-4e3a-8b07-67866ccb2774)'. Ident: 'index-259--4104909142373009110', commit timestamp: 'Timestamp(1574796680, 5560)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.446-0500 I COMMAND [conn112] CMD: drop test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.216-0500 I STORAGE [ReplWriterWorker-11] createCollection: test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c with provided UUID: 5d521e9e-43b0-463b-8cc5-dee1c6d0c70a and options: { uuid: UUID("5d521e9e-43b0-463b-8cc5-dee1c6d0c70a"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.565-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc'. Ident: collection-255--4104909142373009110, commit timestamp: Timestamp(1574796680, 5560)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.446-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: a44df0d1-5fe3-49e9-9506-6d0dc478b012: test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 ( d756c7f4-e40d-4100-b5ee-76e6c74d474a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:20.566-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 2276dd14-e4a2-4694-81fc-8031a172cf72: test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 ( d756c7f4-e40d-4100-b5ee-76e6c74d474a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.446-0500 I INDEX [conn108] Index build completed: a44df0d1-5fe3-49e9-9506-6d0dc478b012
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.446-0500 I STORAGE [conn112] dropCollection: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc (0b7abe64-14bd-4e3a-8b07-67866ccb2774) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.446-0500 I STORAGE [conn112] Finishing collection drop for test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc (0b7abe64-14bd-4e3a-8b07-67866ccb2774).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.446-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc (0b7abe64-14bd-4e3a-8b07-67866ccb2774)'. Ident: 'index-249-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 5560)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.446-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc (0b7abe64-14bd-4e3a-8b07-67866ccb2774)'. Ident: 'index-250-8224331490264904478', commit timestamp: 'Timestamp(1574796680, 5560)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.446-0500 I STORAGE [conn112] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc'. Ident: collection-248-8224331490264904478, commit timestamp: Timestamp(1574796680, 5560)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.447-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.447-0500 I COMMAND [conn67] command test1_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2416130741493260811, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8343984840383674877, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796680332), clusterTime: Timestamp(1574796680, 3234) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 3362), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:986 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 113ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.217-0500 I STORAGE [ReplWriterWorker-4] createCollection: test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 with provided UUID: ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f and options: { uuid: UUID("ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.449-0500 I STORAGE [conn112] createCollection: test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 with generated UUID: ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.449-0500 I STORAGE [conn114] createCollection: test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c with generated UUID: 5d521e9e-43b0-463b-8cc5-dee1c6d0c70a and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.449-0500 I STORAGE [conn108] createCollection: test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 with generated UUID: 00c54702-2dc2-4686-92b2-e7c65a8d3cca and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.450-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.472-0500 I INDEX [conn112] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:20.480-0500 I INDEX [conn114] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.198-0500 I COMMAND [conn112] command test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 appName: "tid:4" command: create { create: "tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67", temp: true, validationLevel: "strict", validationAction: "error", databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 5560), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2749ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.198-0500 I COMMAND [conn114] command test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c appName: "tid:3" command: create { create: "tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c", temp: true, validationLevel: "strict", validationAction: "error", databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 5560), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2749ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.207-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4d396abe-999f-42e6-a59c-e15bc863c340: test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 ( 1d28fb55-bae1-4e1f-a042-d821034bbb65 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.207-0500 I INDEX [conn46] Index build completed: 4d396abe-999f-42e6-a59c-e15bc863c340
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.207-0500 I COMMAND [conn46] command test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 5117), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 6603 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2789ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.215-0500 I INDEX [conn108] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.215-0500 I COMMAND [conn110] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.215-0500 I COMMAND [conn108] command test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 appName: "tid:0" command: create { create: "tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5", temp: true, validationLevel: "strict", validationAction: "error", databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 5562), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2765ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.215-0500 I STORAGE [conn110] dropCollection: test1_fsmdb0.agg_out (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 2), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.215-0500 I STORAGE [conn110] Finishing collection drop for test1_fsmdb0.agg_out (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.215-0500 I STORAGE [conn110] renameCollection: renaming collection d756c7f4-e40d-4100-b5ee-76e6c74d474a from test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.215-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b)'. Ident: 'index-227-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.215-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b)'. Ident: 'index-232-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.215-0500 I STORAGE [conn110] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-225-8224331490264904478, commit timestamp: Timestamp(1574796683, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.215-0500 I INDEX [conn114] Registering index build: 2a57ded3-630f-4f22-a4e4-a5eeef5849f8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.215-0500 I COMMAND [conn110] command test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 appName: "tid:1" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789", to: "test1_fsmdb0.agg_out", collectionOptions: { validationLevel: "strict", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 6064), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2741950 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2742ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.215-0500 I INDEX [conn112] Registering index build: 98c7922f-0918-4e6e-9e94-0eaddbd933dc
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.215-0500 I INDEX [conn108] Registering index build: 29a9b38f-90d8-47d2-9519-dd6bbd5dc5ee
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.215-0500 I COMMAND [conn119] command test1_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796680, 4431), lsid: { id: UUID("c8c15e08-f1a6-4edc-831c-249e4d0ea0c0") }, $clusterTime: { clusterTime: Timestamp(1574796680, 4559), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796680, 4431). Collection minimum timestamp is Timestamp(1574796680, 5563)" errName:SnapshotUnavailable errCode:246 reslen:582 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2724532 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2724ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.216-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6778938175565989208, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6645679260058695271, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796680373), clusterTime: Timestamp(1574796680, 4559) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 4688), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 920 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2841ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.219-0500 I STORAGE [conn110] createCollection: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c with generated UUID: ae3782c8-1806-4cee-93b4-66b1f929ab8b and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.233-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.233-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.234-0500 I STORAGE [ReplWriterWorker-12] createCollection: test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 with provided UUID: 00c54702-2dc2-4686-92b2-e7c65a8d3cca and options: { uuid: UUID("00c54702-2dc2-4686-92b2-e7c65a8d3cca"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.234-0500 I STORAGE [ReplWriterWorker-5] createCollection: test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c with provided UUID: 5d521e9e-43b0-463b-8cc5-dee1c6d0c70a and options: { uuid: UUID("5d521e9e-43b0-463b-8cc5-dee1c6d0c70a"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.238-0500 I INDEX [conn114] index build: starting on test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.238-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.238-0500 I STORAGE [conn114] Index build initialized: 2a57ded3-630f-4f22-a4e4-a5eeef5849f8: test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.239-0500 I INDEX [conn114] Waiting for index build to complete: 2a57ded3-630f-4f22-a4e4-a5eeef5849f8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.246-0500 I INDEX [conn110] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.249-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.252-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.253-0500 I STORAGE [ReplWriterWorker-13] createCollection: test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 with provided UUID: 00c54702-2dc2-4686-92b2-e7c65a8d3cca and options: { uuid: UUID("00c54702-2dc2-4686-92b2-e7c65a8d3cca"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.262-0500 I INDEX [conn112] index build: starting on test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.262-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.262-0500 I STORAGE [conn112] Index build initialized: 98c7922f-0918-4e6e-9e94-0eaddbd933dc: test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.262-0500 I INDEX [conn112] Waiting for index build to complete: 98c7922f-0918-4e6e-9e94-0eaddbd933dc
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.262-0500 I COMMAND [conn46] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.262-0500 I STORAGE [conn46] dropCollection: test1_fsmdb0.agg_out (d756c7f4-e40d-4100-b5ee-76e6c74d474a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 506), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.262-0500 I STORAGE [conn46] Finishing collection drop for test1_fsmdb0.agg_out (d756c7f4-e40d-4100-b5ee-76e6c74d474a).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.262-0500 I STORAGE [conn46] renameCollection: renaming collection 1d28fb55-bae1-4e1f-a042-d821034bbb65 from test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.262-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (d756c7f4-e40d-4100-b5ee-76e6c74d474a)'. Ident: 'index-253-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 506)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.262-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (d756c7f4-e40d-4100-b5ee-76e6c74d474a)'. Ident: 'index-255-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 506)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.262-0500 I STORAGE [conn46] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-252-8224331490264904478, commit timestamp: Timestamp(1574796683, 506)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.263-0500 I INDEX [conn110] Registering index build: 4694a75e-0d94-4ee5-8120-f9266606a99b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.263-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.263-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.263-0500 I COMMAND [conn65] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7673240002989444110, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8084836518615272400, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796680376), clusterTime: Timestamp(1574796680, 4691) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 4808), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2886ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:23.263-0500 I COMMAND [conn74] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856") }, $clusterTime: { clusterTime: Timestamp(1574796680, 4691), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2887ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.263-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.264-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.265-0500 I INDEX [ReplWriterWorker-8] index build: starting on test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.265-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.265-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 59d5e6e6-9cc7-4877-becd-abbbaa61d8a8: test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 (1d28fb55-bae1-4e1f-a042-d821034bbb65 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.265-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.266-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.266-0500 I STORAGE [conn46] createCollection: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 with generated UUID: d1456d15-3660-4736-b7ef-53da289b5310 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.268-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.269-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.273-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 (d756c7f4-e40d-4100-b5ee-76e6c74d474a) to test1_fsmdb0.agg_out and drop ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.273-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test1_fsmdb0.agg_out (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 2), t: 1 } and commit timestamp Timestamp(1574796683, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.273-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test1_fsmdb0.agg_out (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.273-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection d756c7f4-e40d-4100-b5ee-76e6c74d474a from test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.273-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b)'. Ident: 'index-232--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.273-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b)'. Ident: 'index-243--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.273-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-231--8000595249233899911, commit timestamp: Timestamp(1574796683, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.274-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 59d5e6e6-9cc7-4877-becd-abbbaa61d8a8: test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 ( 1d28fb55-bae1-4e1f-a042-d821034bbb65 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.274-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.274-0500 I STORAGE [ReplWriterWorker-11] createCollection: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c with provided UUID: ae3782c8-1806-4cee-93b4-66b1f929ab8b and options: { uuid: UUID("ae3782c8-1806-4cee-93b4-66b1f929ab8b"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.277-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.284-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.285-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.285-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: a33c7049-a066-4bde-829a-f401cf49ccb4: test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 (1d28fb55-bae1-4e1f-a042-d821034bbb65 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.285-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.285-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.288-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.290-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.292-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: a33c7049-a066-4bde-829a-f401cf49ccb4: test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 ( 1d28fb55-bae1-4e1f-a042-d821034bbb65 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.292-0500 I INDEX [conn108] index build: starting on test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.292-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.292-0500 I STORAGE [conn108] Index build initialized: 29a9b38f-90d8-47d2-9519-dd6bbd5dc5ee: test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 (00c54702-2dc2-4686-92b2-e7c65a8d3cca ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.292-0500 I INDEX [conn108] Waiting for index build to complete: 29a9b38f-90d8-47d2-9519-dd6bbd5dc5ee
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.292-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 98c7922f-0918-4e6e-9e94-0eaddbd933dc: test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c ( 5d521e9e-43b0-463b-8cc5-dee1c6d0c70a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.292-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.293-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 (d756c7f4-e40d-4100-b5ee-76e6c74d474a) to test1_fsmdb0.agg_out and drop ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.293-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test1_fsmdb0.agg_out (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 2), t: 1 } and commit timestamp Timestamp(1574796683, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.293-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test1_fsmdb0.agg_out (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.293-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection d756c7f4-e40d-4100-b5ee-76e6c74d474a from test1_fsmdb0.tmp.agg_out.532694e4-7110-48e4-93e7-5a0a9d2bb789 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.293-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b)'. Ident: 'index-232--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.293-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (ba4d01f2-96e2-4d00-9f20-ae2eb8b7ea1b)'. Ident: 'index-243--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.293-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-231--4104909142373009110, commit timestamp: Timestamp(1574796683, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.294-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 2a57ded3-630f-4f22-a4e4-a5eeef5849f8: test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 ( ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.294-0500 I STORAGE [ReplWriterWorker-14] createCollection: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c with provided UUID: ae3782c8-1806-4cee-93b4-66b1f929ab8b and options: { uuid: UUID("ae3782c8-1806-4cee-93b4-66b1f929ab8b"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.296-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 (1d28fb55-bae1-4e1f-a042-d821034bbb65) to test1_fsmdb0.agg_out and drop d756c7f4-e40d-4100-b5ee-76e6c74d474a.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.296-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test1_fsmdb0.agg_out (d756c7f4-e40d-4100-b5ee-76e6c74d474a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 506), t: 1 } and commit timestamp Timestamp(1574796683, 506)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.296-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test1_fsmdb0.agg_out (d756c7f4-e40d-4100-b5ee-76e6c74d474a).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.296-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 1d28fb55-bae1-4e1f-a042-d821034bbb65 from test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.296-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (d756c7f4-e40d-4100-b5ee-76e6c74d474a)'. Ident: 'index-262--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 506)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.296-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (d756c7f4-e40d-4100-b5ee-76e6c74d474a)'. Ident: 'index-265--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 506)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.296-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-261--8000595249233899911, commit timestamp: Timestamp(1574796683, 506)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.301-0500 I INDEX [conn46] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.302-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.302-0500 I INDEX [conn46] Registering index build: 10f0acb1-e3dd-4b79-8d3d-aebf9f399d25
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.303-0500 I STORAGE [ReplWriterWorker-6] createCollection: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 with provided UUID: d1456d15-3660-4736-b7ef-53da289b5310 and options: { uuid: UUID("d1456d15-3660-4736-b7ef-53da289b5310"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.304-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.311-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.313-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 29a9b38f-90d8-47d2-9519-dd6bbd5dc5ee: test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 ( 00c54702-2dc2-4686-92b2-e7c65a8d3cca ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.318-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.319-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 (1d28fb55-bae1-4e1f-a042-d821034bbb65) to test1_fsmdb0.agg_out and drop d756c7f4-e40d-4100-b5ee-76e6c74d474a.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.319-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test1_fsmdb0.agg_out (d756c7f4-e40d-4100-b5ee-76e6c74d474a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 506), t: 1 } and commit timestamp Timestamp(1574796683, 506)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.319-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test1_fsmdb0.agg_out (d756c7f4-e40d-4100-b5ee-76e6c74d474a).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.319-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 1d28fb55-bae1-4e1f-a042-d821034bbb65 from test1_fsmdb0.tmp.agg_out.d6fa6956-efcc-46ca-91bd-1425e81a4ac8 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.319-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (d756c7f4-e40d-4100-b5ee-76e6c74d474a)'. Ident: 'index-262--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 506)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.319-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (d756c7f4-e40d-4100-b5ee-76e6c74d474a)'. Ident: 'index-265--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 506)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.319-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-261--4104909142373009110, commit timestamp: Timestamp(1574796683, 506)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.319-0500 I INDEX [conn110] index build: starting on test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.319-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.319-0500 I STORAGE [conn110] Index build initialized: 4694a75e-0d94-4ee5-8120-f9266606a99b: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c (ae3782c8-1806-4cee-93b4-66b1f929ab8b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.319-0500 I INDEX [conn110] Waiting for index build to complete: 4694a75e-0d94-4ee5-8120-f9266606a99b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.319-0500 I INDEX [conn112] Index build completed: 98c7922f-0918-4e6e-9e94-0eaddbd933dc
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.319-0500 I INDEX [conn114] Index build completed: 2a57ded3-630f-4f22-a4e4-a5eeef5849f8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.319-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.319-0500 I INDEX [conn108] Index build completed: 29a9b38f-90d8-47d2-9519-dd6bbd5dc5ee
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.319-0500 I COMMAND [conn112] command test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 1), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 16605 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 120ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.319-0500 I COMMAND [conn114] command test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 1), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 16590 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 120ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.319-0500 I COMMAND [conn108] command test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 2), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 457 } }, Collection: { acquireCount: { w: 1, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 15 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 104ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.320-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.320-0500 I STORAGE [ReplWriterWorker-9] createCollection: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 with provided UUID: d1456d15-3660-4736-b7ef-53da289b5310 and options: { uuid: UUID("d1456d15-3660-4736-b7ef-53da289b5310"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.320-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:23.320-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46538 #125 (45 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:23.320-0500 I NETWORK [conn125] received client metadata from 127.0.0.1:46538 conn125: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.323-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.331-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 4694a75e-0d94-4ee5-8120-f9266606a99b: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c ( ae3782c8-1806-4cee-93b4-66b1f929ab8b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.334-0500 I INDEX [ReplWriterWorker-13] index build: starting on test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.334-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.334-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 8f9b5fc7-f33b-4858-91ea-9fe310b0adb5: test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.334-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.335-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.337-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.337-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.340-0500 I INDEX [conn46] index build: starting on test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.340-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.340-0500 I STORAGE [conn46] Index build initialized: 10f0acb1-e3dd-4b79-8d3d-aebf9f399d25: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 (d1456d15-3660-4736-b7ef-53da289b5310 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.340-0500 I INDEX [conn46] Waiting for index build to complete: 10f0acb1-e3dd-4b79-8d3d-aebf9f399d25
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.340-0500 I INDEX [conn110] Index build completed: 4694a75e-0d94-4ee5-8120-f9266606a99b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.341-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.341-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.344-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.345-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 8f9b5fc7-f33b-4858-91ea-9fe310b0adb5: test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c ( 5d521e9e-43b0-463b-8cc5-dee1c6d0c70a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.346-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 10f0acb1-e3dd-4b79-8d3d-aebf9f399d25: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 ( d1456d15-3660-4736-b7ef-53da289b5310 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.346-0500 I INDEX [conn46] Index build completed: 10f0acb1-e3dd-4b79-8d3d-aebf9f399d25
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.352-0500 I INDEX [ReplWriterWorker-4] index build: starting on test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.352-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.354-0500 I INDEX [ReplWriterWorker-14] index build: starting on test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.359-0500 I COMMAND [conn108] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:23.360-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563") }, $clusterTime: { clusterTime: Timestamp(1574796680, 5558), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2913ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:23.361-0500 I COMMAND [conn32] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee") }, $clusterTime: { clusterTime: Timestamp(1574796680, 5559), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2913ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:23.641-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d8b5cde74b6784bb459
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.354-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.352-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 850388b4-ab2f-4e9e-9703-721af38ebf30: test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.359-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.agg_out (1d28fb55-bae1-4e1f-a042-d821034bbb65) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 2213), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:23.362-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d") }, $clusterTime: { clusterTime: Timestamp(1574796680, 5560), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2914ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:23.429-0500 I COMMAND [conn33] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063") }, $clusterTime: { clusterTime: Timestamp(1574796683, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:819 protocol:op_msg 211ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:23.641-0500 I SHARDING [conn19] Enabling sharding for database [test1_fsmdb0] in config db
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.352-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.354-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 3f4e478d-35ab-4d50-9654-e633543766ca: test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.359-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.agg_out (1d28fb55-bae1-4e1f-a042-d821034bbb65).
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:23.456-0500 I COMMAND [conn74] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856") }, $clusterTime: { clusterTime: Timestamp(1574796683, 506), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:819 protocol:op_msg 191ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:23.555-0500 I COMMAND [conn32] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee") }, $clusterTime: { clusterTime: Timestamp(1574796683, 2472), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 187ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:23.642-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d8b5cde74b6784bb459' unlocked.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.353-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.354-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.360-0500 I STORAGE [conn108] renameCollection: renaming collection ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f from test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:23.510-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563") }, $clusterTime: { clusterTime: Timestamp(1574796683, 2214), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 148ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:23.574-0500 I COMMAND [conn33] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063") }, $clusterTime: { clusterTime: Timestamp(1574796683, 3029), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 144ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:23.645-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d8b5cde74b6784bb45f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.355-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.354-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.360-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (1d28fb55-bae1-4e1f-a042-d821034bbb65)'. Ident: 'index-256-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 2213)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:23.574-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d") }, $clusterTime: { clusterTime: Timestamp(1574796683, 2600), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 206ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:23.647-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d8b5cde74b6784bb461
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.365-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 850388b4-ab2f-4e9e-9703-721af38ebf30: test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 ( ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.357-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.360-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (1d28fb55-bae1-4e1f-a042-d821034bbb65)'. Ident: 'index-258-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 2213)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:23.637-0500 I COMMAND [conn74] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856") }, $clusterTime: { clusterTime: Timestamp(1574796683, 3034), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 179ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.372-0500 I INDEX [ReplWriterWorker-14] index build: starting on test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.365-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 3f4e478d-35ab-4d50-9654-e633543766ca: test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c ( 5d521e9e-43b0-463b-8cc5-dee1c6d0c70a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.360-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-254-8224331490264904478, commit timestamp: Timestamp(1574796683, 2213)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:26.596-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563") }, $clusterTime: { clusterTime: Timestamp(1574796683, 3546), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3066ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.372-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.373-0500 I INDEX [ReplWriterWorker-1] index build: starting on test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.360-0500 I COMMAND [conn112] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.372-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 52d8f47f-1a41-4d05-8b1a-46cc8c15801c: test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 (00c54702-2dc2-4686-92b2-e7c65a8d3cca ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.373-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:26.597-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 from version {} to version { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.360-0500 I STORAGE [conn112] dropCollection: test1_fsmdb0.agg_out (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 2214), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.372-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.373-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 9c131390-12d5-4378-8c34-ff261428f883: test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.360-0500 I STORAGE [conn112] Finishing collection drop for test1_fsmdb0.agg_out (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.373-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.373-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.360-0500 I COMMAND [conn67] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4888697669619310709, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 392866001274632899, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796680447), clusterTime: Timestamp(1574796680, 5560) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 5560), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2911ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.376-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.373-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.360-0500 I STORAGE [conn112] renameCollection: renaming collection 5d521e9e-43b0-463b-8cc5-dee1c6d0c70a from test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.386-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 52d8f47f-1a41-4d05-8b1a-46cc8c15801c: test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 ( 00c54702-2dc2-4686-92b2-e7c65a8d3cca ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.376-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.360-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f)'. Ident: 'index-262-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 2214)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.393-0500 I INDEX [ReplWriterWorker-13] index build: starting on test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.384-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 9c131390-12d5-4378-8c34-ff261428f883: test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 ( ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.360-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f)'. Ident: 'index-266-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 2214)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.393-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.392-0500 I INDEX [ReplWriterWorker-13] index build: starting on test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:26.599-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.fsmcoll0 to version 1|3||5ddd7d7d3bbfe7fa5630d6e7 took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.360-0500 I STORAGE [conn112] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-260-8224331490264904478, commit timestamp: Timestamp(1574796683, 2214)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.393-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: c35e4878-c89b-4f09-a36b-9f7fb93b2b6e: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c (ae3782c8-1806-4cee-93b4-66b1f929ab8b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.392-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.360-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5331826515931122184, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1930175369807261433, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796680447), clusterTime: Timestamp(1574796680, 5559) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 5560), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2912ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.393-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.392-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 808af04c-3cf5-4c99-8633-531a77f9b9ba: test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 (00c54702-2dc2-4686-92b2-e7c65a8d3cca ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.362-0500 I COMMAND [conn114] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.395-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.392-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.362-0500 I STORAGE [conn114] dropCollection: test1_fsmdb0.agg_out (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 2343), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.398-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.393-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.362-0500 I STORAGE [conn114] Finishing collection drop for test1_fsmdb0.agg_out (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.405-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c35e4878-c89b-4f09-a36b-9f7fb93b2b6e: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c ( ae3782c8-1806-4cee-93b4-66b1f929ab8b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.396-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.362-0500 I STORAGE [conn114] renameCollection: renaming collection 00c54702-2dc2-4686-92b2-e7c65a8d3cca from test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.426-0500 I INDEX [ReplWriterWorker-1] index build: starting on test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.405-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 808af04c-3cf5-4c99-8633-531a77f9b9ba: test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 ( 00c54702-2dc2-4686-92b2-e7c65a8d3cca ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.362-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a)'. Ident: 'index-263-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 2343)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.426-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.413-0500 I INDEX [ReplWriterWorker-14] index build: starting on test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.362-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a)'. Ident: 'index-270-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 2343)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.426-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: ffbdd050-0674-4e62-bf26-4c34b6b01e0e: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 (d1456d15-3660-4736-b7ef-53da289b5310 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.413-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.362-0500 I STORAGE [conn114] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-261-8224331490264904478, commit timestamp: Timestamp(1574796683, 2343)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.426-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.413-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 2f6ae6a8-9d92-4b49-9c35-ed4e12555bde: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c (ae3782c8-1806-4cee-93b4-66b1f929ab8b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.362-0500 I COMMAND [conn68] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4824687277700524396, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3393069120169993775, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796680448), clusterTime: Timestamp(1574796680, 5560) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796680, 5560), signature: { hash: BinData(0, FE2E42C4631029F55810F53F2A579A29DAE3C7CA), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2913ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.426-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.413-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.367-0500 I COMMAND [conn68] CMD: dropIndexes test1_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.430-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.414-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.367-0500 I STORAGE [conn114] createCollection: test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 with generated UUID: 563b6fa5-ce39-4d1e-8922-7e08c741a184 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.434-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: ffbdd050-0674-4e62-bf26-4c34b6b01e0e: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 ( d1456d15-3660-4736-b7ef-53da289b5310 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.417-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.369-0500 I STORAGE [conn112] createCollection: test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f with generated UUID: 6087aa56-aa41-4dee-855d-21ecba6e0c89 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.436-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f) to test1_fsmdb0.agg_out and drop 1d28fb55-bae1-4e1f-a042-d821034bbb65.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.422-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 2f6ae6a8-9d92-4b49-9c35-ed4e12555bde: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c ( ae3782c8-1806-4cee-93b4-66b1f929ab8b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.369-0500 I STORAGE [conn108] createCollection: test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec with generated UUID: 91a2f9be-7116-4319-993b-b3a9372f04e8 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.436-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test1_fsmdb0.agg_out (1d28fb55-bae1-4e1f-a042-d821034bbb65) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 2213), t: 1 } and commit timestamp Timestamp(1574796683, 2213)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.441-0500 I INDEX [ReplWriterWorker-7] index build: starting on test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.398-0500 I INDEX [conn114] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.436-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test1_fsmdb0.agg_out (1d28fb55-bae1-4e1f-a042-d821034bbb65).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.441-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.407-0500 I INDEX [conn112] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.436-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f from test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.441-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 4c4b1afd-6234-4943-8734-cd6bec7646ff: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 (d1456d15-3660-4736-b7ef-53da289b5310 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.414-0500 I INDEX [conn108] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.436-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (1d28fb55-bae1-4e1f-a042-d821034bbb65)'. Ident: 'index-264--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 2213)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.441-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.415-0500 I INDEX [conn114] Registering index build: 5c7268e0-e9a6-4cb2-917a-3bdbe047ac4a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.436-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (1d28fb55-bae1-4e1f-a042-d821034bbb65)'. Ident: 'index-273--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 2213)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.442-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.415-0500 I INDEX [conn112] Registering index build: e37a63e2-935f-4760-8943-efcea872fd05
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.602-0500 D4 TXN [conn31] New transaction started with txnNumber: 0 on session with lsid ab4a2216-27a6-4418-9665-2c847dd9395a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.436-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-263--8000595249233899911, commit timestamp: Timestamp(1574796683, 2213)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.444-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.415-0500 I INDEX [conn110] Registering index build: 626a4548-bc13-4304-80a1-8dceb2eb1522
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.437-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a) to test1_fsmdb0.agg_out and drop ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.447-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 4c4b1afd-6234-4943-8734-cd6bec7646ff: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 ( d1456d15-3660-4736-b7ef-53da289b5310 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.415-0500 I COMMAND [conn108] CMD: drop test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.437-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test1_fsmdb0.agg_out (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 2214), t: 1 } and commit timestamp Timestamp(1574796683, 2214)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.450-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f) to test1_fsmdb0.agg_out and drop 1d28fb55-bae1-4e1f-a042-d821034bbb65.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.428-0500 I INDEX [conn114] index build: starting on test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.437-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test1_fsmdb0.agg_out (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.450-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test1_fsmdb0.agg_out (1d28fb55-bae1-4e1f-a042-d821034bbb65) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 2213), t: 1 } and commit timestamp Timestamp(1574796683, 2213)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.428-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.437-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 5d521e9e-43b0-463b-8cc5-dee1c6d0c70a from test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.450-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test1_fsmdb0.agg_out (1d28fb55-bae1-4e1f-a042-d821034bbb65).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.428-0500 I STORAGE [conn114] Index build initialized: 5c7268e0-e9a6-4cb2-917a-3bdbe047ac4a: test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 (563b6fa5-ce39-4d1e-8922-7e08c741a184 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.437-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f)'. Ident: 'index-268--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 2214)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.450-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f from test1_fsmdb0.tmp.agg_out.b988aec5-fc79-4440-a840-c47d17ed3b67 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.428-0500 I INDEX [conn114] Waiting for index build to complete: 5c7268e0-e9a6-4cb2-917a-3bdbe047ac4a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.437-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f)'. Ident: 'index-281--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 2214)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.450-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (1d28fb55-bae1-4e1f-a042-d821034bbb65)'. Ident: 'index-264--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 2213)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.428-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c (ae3782c8-1806-4cee-93b4-66b1f929ab8b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.437-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-267--8000595249233899911, commit timestamp: Timestamp(1574796683, 2214)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.450-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (1d28fb55-bae1-4e1f-a042-d821034bbb65)'. Ident: 'index-273--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 2213)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.428-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c (ae3782c8-1806-4cee-93b4-66b1f929ab8b).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.441-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 (00c54702-2dc2-4686-92b2-e7c65a8d3cca) to test1_fsmdb0.agg_out and drop 5d521e9e-43b0-463b-8cc5-dee1c6d0c70a.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.450-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-263--4104909142373009110, commit timestamp: Timestamp(1574796683, 2213)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.428-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c (ae3782c8-1806-4cee-93b4-66b1f929ab8b)'. Ident: 'index-269-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 3029)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.441-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test1_fsmdb0.agg_out (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 2343), t: 1 } and commit timestamp Timestamp(1574796683, 2343)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.451-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a) to test1_fsmdb0.agg_out and drop ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.428-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c (ae3782c8-1806-4cee-93b4-66b1f929ab8b)'. Ident: 'index-276-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 3029)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.441-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test1_fsmdb0.agg_out (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.451-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test1_fsmdb0.agg_out (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 2214), t: 1 } and commit timestamp Timestamp(1574796683, 2214)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.428-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c'. Ident: collection-267-8224331490264904478, commit timestamp: Timestamp(1574796683, 3029)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.441-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 00c54702-2dc2-4686-92b2-e7c65a8d3cca from test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.451-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test1_fsmdb0.agg_out (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.428-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.441-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a)'. Ident: 'index-270--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 2343)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.451-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 5d521e9e-43b0-463b-8cc5-dee1c6d0c70a from test1_fsmdb0.tmp.agg_out.7b37246f-97b1-41d4-b929-2443bf05c30c to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.428-0500 I COMMAND [conn71] command test1_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 9013483675781987934, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7417190932286663639, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796683217), clusterTime: Timestamp(1574796683, 2) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 2), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 210ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.441-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a)'. Ident: 'index-279--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 2343)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.451-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f)'. Ident: 'index-268--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 2214)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.429-0500 I COMMAND [conn46] CMD: drop test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.441-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-269--8000595249233899911, commit timestamp: Timestamp(1574796683, 2343)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.451-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (ce128d65-c0cd-4b6b-bc1a-d43ecc1e2e6f)'. Ident: 'index-281--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 2214)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.429-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.452-0500 I STORAGE [ReplWriterWorker-6] createCollection: test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 with provided UUID: 563b6fa5-ce39-4d1e-8922-7e08c741a184 and options: { uuid: UUID("563b6fa5-ce39-4d1e-8922-7e08c741a184"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.451-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-267--4104909142373009110, commit timestamp: Timestamp(1574796683, 2214)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.431-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.471-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.453-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 (00c54702-2dc2-4686-92b2-e7c65a8d3cca) to test1_fsmdb0.agg_out and drop 5d521e9e-43b0-463b-8cc5-dee1c6d0c70a.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.432-0500 I STORAGE [conn108] createCollection: test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 with generated UUID: b2590667-ea41-4e16-914b-25952509c04e and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.473-0500 I STORAGE [ReplWriterWorker-3] createCollection: test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f with provided UUID: 6087aa56-aa41-4dee-855d-21ecba6e0c89 and options: { uuid: UUID("6087aa56-aa41-4dee-855d-21ecba6e0c89"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.453-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test1_fsmdb0.agg_out (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 2343), t: 1 } and commit timestamp Timestamp(1574796683, 2343)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.440-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 5c7268e0-e9a6-4cb2-917a-3bdbe047ac4a: test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 ( 563b6fa5-ce39-4d1e-8922-7e08c741a184 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.488-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.453-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test1_fsmdb0.agg_out (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.455-0500 I INDEX [conn112] index build: starting on test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.489-0500 I STORAGE [ReplWriterWorker-6] createCollection: test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec with provided UUID: 91a2f9be-7116-4319-993b-b3a9372f04e8 and options: { uuid: UUID("91a2f9be-7116-4319-993b-b3a9372f04e8"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.453-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 00c54702-2dc2-4686-92b2-e7c65a8d3cca from test1_fsmdb0.tmp.agg_out.a5bc6a0a-292d-47da-bb73-5b67c260dda5 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.455-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.505-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.453-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a)'. Ident: 'index-270--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 2343)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.455-0500 I STORAGE [conn112] Index build initialized: e37a63e2-935f-4760-8943-efcea872fd05: test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f (6087aa56-aa41-4dee-855d-21ecba6e0c89 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.509-0500 I COMMAND [ReplWriterWorker-9] CMD: drop test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.453-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (5d521e9e-43b0-463b-8cc5-dee1c6d0c70a)'. Ident: 'index-279--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 2343)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.455-0500 I INDEX [conn112] Waiting for index build to complete: e37a63e2-935f-4760-8943-efcea872fd05
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.509-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c (ae3782c8-1806-4cee-93b4-66b1f929ab8b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 3029), t: 1 } and commit timestamp Timestamp(1574796683, 3029)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.453-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-269--4104909142373009110, commit timestamp: Timestamp(1574796683, 2343)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.455-0500 I INDEX [conn114] Index build completed: 5c7268e0-e9a6-4cb2-917a-3bdbe047ac4a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.509-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c (ae3782c8-1806-4cee-93b4-66b1f929ab8b).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.472-0500 I STORAGE [ReplWriterWorker-6] createCollection: test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 with provided UUID: 563b6fa5-ce39-4d1e-8922-7e08c741a184 and options: { uuid: UUID("563b6fa5-ce39-4d1e-8922-7e08c741a184"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.455-0500 I STORAGE [conn46] dropCollection: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 (d1456d15-3660-4736-b7ef-53da289b5310) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.509-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c (ae3782c8-1806-4cee-93b4-66b1f929ab8b)'. Ident: 'index-276--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 3029)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.488-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.455-0500 I STORAGE [conn46] Finishing collection drop for test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 (d1456d15-3660-4736-b7ef-53da289b5310).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.509-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c (ae3782c8-1806-4cee-93b4-66b1f929ab8b)'. Ident: 'index-285--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 3029)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.491-0500 I STORAGE [ReplWriterWorker-1] createCollection: test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f with provided UUID: 6087aa56-aa41-4dee-855d-21ecba6e0c89 and options: { uuid: UUID("6087aa56-aa41-4dee-855d-21ecba6e0c89"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.455-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 (d1456d15-3660-4736-b7ef-53da289b5310)'. Ident: 'index-275-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 3034)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.509-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c'. Ident: collection-275--8000595249233899911, commit timestamp: Timestamp(1574796683, 3029)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.507-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.455-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 (d1456d15-3660-4736-b7ef-53da289b5310)'. Ident: 'index-278-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 3034)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.526-0500 I INDEX [ReplWriterWorker-2] index build: starting on test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.508-0500 I STORAGE [ReplWriterWorker-10] createCollection: test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec with provided UUID: 91a2f9be-7116-4319-993b-b3a9372f04e8 and options: { uuid: UUID("91a2f9be-7116-4319-993b-b3a9372f04e8"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.455-0500 I STORAGE [conn46] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88'. Ident: collection-273-8224331490264904478, commit timestamp: Timestamp(1574796683, 3034)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.526-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.523-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.456-0500 I COMMAND [conn65] command test1_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6968922095344955036, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5039439980496133237, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796683264), clusterTime: Timestamp(1574796683, 506) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 506), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 190ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.526-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: d7f20810-7d9c-4eaf-b30f-c86adf82f918: test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 (563b6fa5-ce39-4d1e-8922-7e08c741a184 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.527-0500 I COMMAND [conn56] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796683, 2975) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("c65a601f-c957-428e-adeb-3bd85740d639") }, $clusterTime: { clusterTime: Timestamp(1574796683, 2975), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 4376 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 107ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.459-0500 I STORAGE [conn46] createCollection: test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d with generated UUID: 9565e9b4-7b18-4d2e-bcf3-2611f80a00bf and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.527-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.528-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.461-0500 I INDEX [conn108] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.527-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.528-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c (ae3782c8-1806-4cee-93b4-66b1f929ab8b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 3029), t: 1 } and commit timestamp Timestamp(1574796683, 3029)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.462-0500 I INDEX [conn108] Registering index build: e4972dda-b514-47cd-8343-f43da07d88a5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.528-0500 I STORAGE [ReplWriterWorker-7] createCollection: test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 with provided UUID: b2590667-ea41-4e16-914b-25952509c04e and options: { uuid: UUID("b2590667-ea41-4e16-914b-25952509c04e"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.528-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c (ae3782c8-1806-4cee-93b4-66b1f929ab8b).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.480-0500 I INDEX [conn110] index build: starting on test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.530-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.528-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c (ae3782c8-1806-4cee-93b4-66b1f929ab8b)'. Ident: 'index-276--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 3029)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.481-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.539-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: d7f20810-7d9c-4eaf-b30f-c86adf82f918: test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 ( 563b6fa5-ce39-4d1e-8922-7e08c741a184 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.528-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c (ae3782c8-1806-4cee-93b4-66b1f929ab8b)'. Ident: 'index-285--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 3029)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.481-0500 I STORAGE [conn110] Index build initialized: 626a4548-bc13-4304-80a1-8dceb2eb1522: test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec (91a2f9be-7116-4319-993b-b3a9372f04e8 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.546-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.528-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c'. Ident: collection-275--4104909142373009110, commit timestamp: Timestamp(1574796683, 3029)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.481-0500 I INDEX [conn110] Waiting for index build to complete: 626a4548-bc13-4304-80a1-8dceb2eb1522
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.548-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.543-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.481-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.548-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 (d1456d15-3660-4736-b7ef-53da289b5310) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 3034), t: 1 } and commit timestamp Timestamp(1574796683, 3034)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.543-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.488-0500 I INDEX [conn46] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.548-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 (d1456d15-3660-4736-b7ef-53da289b5310).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.543-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 8a22fcc4-5f75-4c48-9742-458c35e6cd03: test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 (563b6fa5-ce39-4d1e-8922-7e08c741a184 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.488-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.548-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 (d1456d15-3660-4736-b7ef-53da289b5310)'. Ident: 'index-278--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 3034)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.543-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.490-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.548-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 (d1456d15-3660-4736-b7ef-53da289b5310)'. Ident: 'index-287--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 3034)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.544-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.493-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e37a63e2-935f-4760-8943-efcea872fd05: test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f ( 6087aa56-aa41-4dee-855d-21ecba6e0c89 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.548-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88'. Ident: collection-277--8000595249233899911, commit timestamp: Timestamp(1574796683, 3034)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.546-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I INDEX [conn108] index build: starting on test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.549-0500 I STORAGE [ReplWriterWorker-10] createCollection: test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d with provided UUID: 9565e9b4-7b18-4d2e-bcf3-2611f80a00bf and options: { uuid: UUID("9565e9b4-7b18-4d2e-bcf3-2611f80a00bf"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.547-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 8a22fcc4-5f75-4c48-9742-458c35e6cd03: test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 ( 563b6fa5-ce39-4d1e-8922-7e08c741a184 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.564-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.548-0500 I STORAGE [ReplWriterWorker-6] createCollection: test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 with provided UUID: b2590667-ea41-4e16-914b-25952509c04e and options: { uuid: UUID("b2590667-ea41-4e16-914b-25952509c04e"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I STORAGE [conn108] Index build initialized: e4972dda-b514-47cd-8343-f43da07d88a5: test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 (b2590667-ea41-4e16-914b-25952509c04e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.587-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.563-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I INDEX [conn108] Waiting for index build to complete: e4972dda-b514-47cd-8343-f43da07d88a5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.587-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.564-0500 I COMMAND [ReplWriterWorker-5] CMD: drop test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I INDEX [conn112] Index build completed: e37a63e2-935f-4760-8943-efcea872fd05
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.587-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 314efb93-3d5e-455a-81b6-a37b46e0af3f: test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f (6087aa56-aa41-4dee-855d-21ecba6e0c89 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.565-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 (d1456d15-3660-4736-b7ef-53da289b5310) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 3034), t: 1 } and commit timestamp Timestamp(1574796683, 3034)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I COMMAND [conn114] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.587-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.565-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 (d1456d15-3660-4736-b7ef-53da289b5310).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I COMMAND [conn112] command test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 2975), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 7770 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 101ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.588-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.565-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 (d1456d15-3660-4736-b7ef-53da289b5310)'. Ident: 'index-278--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 3034)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I STORAGE [conn114] dropCollection: test1_fsmdb0.agg_out (00c54702-2dc2-4686-92b2-e7c65a8d3cca) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 3540), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.589-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 (563b6fa5-ce39-4d1e-8922-7e08c741a184) to test1_fsmdb0.agg_out and drop 00c54702-2dc2-4686-92b2-e7c65a8d3cca.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.565-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88 (d1456d15-3660-4736-b7ef-53da289b5310)'. Ident: 'index-287--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 3034)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I STORAGE [conn114] Finishing collection drop for test1_fsmdb0.agg_out (00c54702-2dc2-4686-92b2-e7c65a8d3cca).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.590-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.565-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88'. Ident: collection-277--4104909142373009110, commit timestamp: Timestamp(1574796683, 3034)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I STORAGE [conn114] renameCollection: renaming collection 563b6fa5-ce39-4d1e-8922-7e08c741a184 from test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.590-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test1_fsmdb0.agg_out (00c54702-2dc2-4686-92b2-e7c65a8d3cca) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 3540), t: 1 } and commit timestamp Timestamp(1574796683, 3540)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.565-0500 I STORAGE [ReplWriterWorker-10] createCollection: test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d with provided UUID: 9565e9b4-7b18-4d2e-bcf3-2611f80a00bf and options: { uuid: UUID("9565e9b4-7b18-4d2e-bcf3-2611f80a00bf"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (00c54702-2dc2-4686-92b2-e7c65a8d3cca)'. Ident: 'index-265-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 3540)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.590-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test1_fsmdb0.agg_out (00c54702-2dc2-4686-92b2-e7c65a8d3cca).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.579-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (00c54702-2dc2-4686-92b2-e7c65a8d3cca)'. Ident: 'index-272-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 3540)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.590-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 563b6fa5-ce39-4d1e-8922-7e08c741a184 from test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.602-0500 I INDEX [ReplWriterWorker-7] index build: starting on test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I STORAGE [conn114] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-264-8224331490264904478, commit timestamp: Timestamp(1574796683, 3540)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.591-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (00c54702-2dc2-4686-92b2-e7c65a8d3cca)'. Ident: 'index-272--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 3540)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.602-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.591-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (00c54702-2dc2-4686-92b2-e7c65a8d3cca)'. Ident: 'index-283--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 3540)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.602-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 38449499-b8a5-4aee-aff8-2c40a00bce87: test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f (6087aa56-aa41-4dee-855d-21ecba6e0c89 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.591-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-271--8000595249233899911, commit timestamp: Timestamp(1574796683, 3540)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.603-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I INDEX [conn46] Registering index build: 714d6e67-ebf1-44e5-9cbf-8a130b2cea03
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.593-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 314efb93-3d5e-455a-81b6-a37b46e0af3f: test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f ( 6087aa56-aa41-4dee-855d-21ecba6e0c89 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.603-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.509-0500 I COMMAND [conn67] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5697520089279995198, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1400023311952920073, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796683361), clusterTime: Timestamp(1574796683, 2214) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 2471), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 4331 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 146ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.610-0500 I INDEX [ReplWriterWorker-5] index build: starting on test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.604-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 (563b6fa5-ce39-4d1e-8922-7e08c741a184) to test1_fsmdb0.agg_out and drop 00c54702-2dc2-4686-92b2-e7c65a8d3cca.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.510-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.610-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.606-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.510-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.610-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: eca10659-5a75-4ee0-a802-a62e0b5db275: test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 (b2590667-ea41-4e16-914b-25952509c04e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.606-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test1_fsmdb0.agg_out (00c54702-2dc2-4686-92b2-e7c65a8d3cca) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 3540), t: 1 } and commit timestamp Timestamp(1574796683, 3540)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.520-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.610-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.606-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test1_fsmdb0.agg_out (00c54702-2dc2-4686-92b2-e7c65a8d3cca).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.522-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.610-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.606-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 563b6fa5-ce39-4d1e-8922-7e08c741a184 from test1_fsmdb0.tmp.agg_out.f65304b7-6ea1-461a-bf79-a75be707ad56 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.528-0500 I INDEX [conn46] index build: starting on test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.613-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.606-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (00c54702-2dc2-4686-92b2-e7c65a8d3cca)'. Ident: 'index-272--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 3540)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.528-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.623-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: eca10659-5a75-4ee0-a802-a62e0b5db275: test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 ( b2590667-ea41-4e16-914b-25952509c04e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.606-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (00c54702-2dc2-4686-92b2-e7c65a8d3cca)'. Ident: 'index-283--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 3540)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.528-0500 I STORAGE [conn46] Index build initialized: 714d6e67-ebf1-44e5-9cbf-8a130b2cea03: test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.630-0500 I INDEX [ReplWriterWorker-1] index build: starting on test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.606-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-271--4104909142373009110, commit timestamp: Timestamp(1574796683, 3540)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.528-0500 I INDEX [conn46] Waiting for index build to complete: 714d6e67-ebf1-44e5-9cbf-8a130b2cea03
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.630-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.609-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 38449499-b8a5-4aee-aff8-2c40a00bce87: test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f ( 6087aa56-aa41-4dee-855d-21ecba6e0c89 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.528-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.630-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: e1290775-5a96-475a-bdd4-47f4e34c32e1: test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec (91a2f9be-7116-4319-993b-b3a9372f04e8 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.627-0500 I INDEX [ReplWriterWorker-10] index build: starting on test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.529-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e4972dda-b514-47cd-8343-f43da07d88a5: test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 ( b2590667-ea41-4e16-914b-25952509c04e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.630-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.627-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.529-0500 I INDEX [conn108] Index build completed: e4972dda-b514-47cd-8343-f43da07d88a5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.631-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.627-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: e0b464f0-61cb-46b3-b426-85a221e30014: test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 (b2590667-ea41-4e16-914b-25952509c04e ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.530-0500 I STORAGE [conn114] createCollection: test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a with generated UUID: 5e50e75c-c327-4f05-bb46-1ea87905b919 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.633-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.627-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.532-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 626a4548-bc13-4304-80a1-8dceb2eb1522: test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec ( 91a2f9be-7116-4319-993b-b3a9372f04e8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.635-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e1290775-5a96-475a-bdd4-47f4e34c32e1: test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec ( 91a2f9be-7116-4319-993b-b3a9372f04e8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.628-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.532-0500 I INDEX [conn110] Index build completed: 626a4548-bc13-4304-80a1-8dceb2eb1522
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.637-0500 I STORAGE [ReplWriterWorker-9] createCollection: test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a with provided UUID: 5e50e75c-c327-4f05-bb46-1ea87905b919 and options: { uuid: UUID("5e50e75c-c327-4f05-bb46-1ea87905b919"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.630-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.532-0500 I COMMAND [conn110] command test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 2975), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 116ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.651-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.633-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e0b464f0-61cb-46b3-b426-85a221e30014: test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 ( b2590667-ea41-4e16-914b-25952509c04e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.532-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.673-0500 I INDEX [ReplWriterWorker-6] index build: starting on test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.648-0500 I INDEX [ReplWriterWorker-11] index build: starting on test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.545-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.673-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.648-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.554-0500 I INDEX [conn114] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.673-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 8f3b7116-2d25-4f32-aacc-d23f5d1f711b: test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.648-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 06b05192-5927-4e94-8502-f87d12bd2d54: test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec (91a2f9be-7116-4319-993b-b3a9372f04e8 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.554-0500 I COMMAND [conn112] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.673-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.648-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.555-0500 I STORAGE [conn112] dropCollection: test1_fsmdb0.agg_out (563b6fa5-ce39-4d1e-8922-7e08c741a184) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 4178), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.674-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.649-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.555-0500 I STORAGE [conn112] Finishing collection drop for test1_fsmdb0.agg_out (563b6fa5-ce39-4d1e-8922-7e08c741a184).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.675-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f (6087aa56-aa41-4dee-855d-21ecba6e0c89) to test1_fsmdb0.agg_out and drop 563b6fa5-ce39-4d1e-8922-7e08c741a184.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.651-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.555-0500 I STORAGE [conn112] renameCollection: renaming collection 6087aa56-aa41-4dee-855d-21ecba6e0c89 from test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.676-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.653-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 06b05192-5927-4e94-8502-f87d12bd2d54: test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec ( 91a2f9be-7116-4319-993b-b3a9372f04e8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.555-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (563b6fa5-ce39-4d1e-8922-7e08c741a184)'. Ident: 'index-283-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 4178)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.676-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test1_fsmdb0.agg_out (563b6fa5-ce39-4d1e-8922-7e08c741a184) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 4178), t: 1 } and commit timestamp Timestamp(1574796683, 4178)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.655-0500 I COMMAND [conn56] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796683, 3546) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("c65a601f-c957-428e-adeb-3bd85740d639") }, $clusterTime: { clusterTime: Timestamp(1574796683, 3610), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 13637 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 122ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.555-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (563b6fa5-ce39-4d1e-8922-7e08c741a184)'. Ident: 'index-286-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 4178)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.676-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test1_fsmdb0.agg_out (563b6fa5-ce39-4d1e-8922-7e08c741a184).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.656-0500 I STORAGE [ReplWriterWorker-3] createCollection: test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a with provided UUID: 5e50e75c-c327-4f05-bb46-1ea87905b919 and options: { uuid: UUID("5e50e75c-c327-4f05-bb46-1ea87905b919"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.555-0500 I STORAGE [conn112] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-280-8224331490264904478, commit timestamp: Timestamp(1574796683, 4178)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.676-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 6087aa56-aa41-4dee-855d-21ecba6e0c89 from test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.671-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.555-0500 I INDEX [conn114] Registering index build: c5028b72-dcf2-498e-90fe-9c313f11ca1a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.677-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (563b6fa5-ce39-4d1e-8922-7e08c741a184)'. Ident: 'index-290--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 4178)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.692-0500 I INDEX [ReplWriterWorker-1] index build: starting on test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.555-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3708659432056714758, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6483185553979767398, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796683367), clusterTime: Timestamp(1574796683, 2472) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 2601), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 186ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.677-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (563b6fa5-ce39-4d1e-8922-7e08c741a184)'. Ident: 'index-295--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 4178)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.692-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.555-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 714d6e67-ebf1-44e5-9cbf-8a130b2cea03: test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d ( 9565e9b4-7b18-4d2e-bcf3-2611f80a00bf ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.677-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-289--8000595249233899911, commit timestamp: Timestamp(1574796683, 4178)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.692-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 9274691e-a507-4a4b-aaa3-46b306299322: test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.557-0500 I COMMAND [conn70] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.678-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 8f3b7116-2d25-4f32-aacc-d23f5d1f711b: test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d ( 9565e9b4-7b18-4d2e-bcf3-2611f80a00bf ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.692-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.573-0500 I INDEX [conn114] index build: starting on test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.681-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec (91a2f9be-7116-4319-993b-b3a9372f04e8) to test1_fsmdb0.agg_out and drop 6087aa56-aa41-4dee-855d-21ecba6e0c89.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.692-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.573-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.573-0500 I STORAGE [conn114] Index build initialized: c5028b72-dcf2-498e-90fe-9c313f11ca1a: test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a (5e50e75c-c327-4f05-bb46-1ea87905b919 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.693-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f (6087aa56-aa41-4dee-855d-21ecba6e0c89) to test1_fsmdb0.agg_out and drop 563b6fa5-ce39-4d1e-8922-7e08c741a184.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.681-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test1_fsmdb0.agg_out (6087aa56-aa41-4dee-855d-21ecba6e0c89) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 5052), t: 1 } and commit timestamp Timestamp(1574796683, 5052)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.573-0500 I INDEX [conn114] Waiting for index build to complete: c5028b72-dcf2-498e-90fe-9c313f11ca1a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.695-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.681-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test1_fsmdb0.agg_out (6087aa56-aa41-4dee-855d-21ecba6e0c89).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.573-0500 I INDEX [conn46] Index build completed: 714d6e67-ebf1-44e5-9cbf-8a130b2cea03
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.695-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test1_fsmdb0.agg_out (563b6fa5-ce39-4d1e-8922-7e08c741a184) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 4178), t: 1 } and commit timestamp Timestamp(1574796683, 4178)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.681-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 91a2f9be-7116-4319-993b-b3a9372f04e8 from test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.573-0500 I COMMAND [conn108] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.695-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test1_fsmdb0.agg_out (563b6fa5-ce39-4d1e-8922-7e08c741a184).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.681-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (6087aa56-aa41-4dee-855d-21ecba6e0c89)'. Ident: 'index-292--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 5052)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.573-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.agg_out (6087aa56-aa41-4dee-855d-21ecba6e0c89) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 5052), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.695-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection 6087aa56-aa41-4dee-855d-21ecba6e0c89 from test1_fsmdb0.tmp.agg_out.e07acde3-4beb-4962-82f6-8f6dcca0802f to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.681-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (6087aa56-aa41-4dee-855d-21ecba6e0c89)'. Ident: 'index-301--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 5052)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.573-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.agg_out (6087aa56-aa41-4dee-855d-21ecba6e0c89).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.695-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (563b6fa5-ce39-4d1e-8922-7e08c741a184)'. Ident: 'index-290--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 4178)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.681-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-291--8000595249233899911, commit timestamp: Timestamp(1574796683, 5052)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.573-0500 I STORAGE [conn108] renameCollection: renaming collection 91a2f9be-7116-4319-993b-b3a9372f04e8 from test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.695-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (563b6fa5-ce39-4d1e-8922-7e08c741a184)'. Ident: 'index-295--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 4178)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.682-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 (b2590667-ea41-4e16-914b-25952509c04e) to test1_fsmdb0.agg_out and drop 91a2f9be-7116-4319-993b-b3a9372f04e8.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.573-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (6087aa56-aa41-4dee-855d-21ecba6e0c89)'. Ident: 'index-284-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 5052)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.695-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-289--4104909142373009110, commit timestamp: Timestamp(1574796683, 4178)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.682-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test1_fsmdb0.agg_out (91a2f9be-7116-4319-993b-b3a9372f04e8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 5053), t: 1 } and commit timestamp Timestamp(1574796683, 5053)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.573-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (6087aa56-aa41-4dee-855d-21ecba6e0c89)'. Ident: 'index-288-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 5052)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.697-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 9274691e-a507-4a4b-aaa3-46b306299322: test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d ( 9565e9b4-7b18-4d2e-bcf3-2611f80a00bf ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.682-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test1_fsmdb0.agg_out (91a2f9be-7116-4319-993b-b3a9372f04e8).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.573-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-281-8224331490264904478, commit timestamp: Timestamp(1574796683, 5052)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.700-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec (91a2f9be-7116-4319-993b-b3a9372f04e8) to test1_fsmdb0.agg_out and drop 6087aa56-aa41-4dee-855d-21ecba6e0c89.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.682-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection b2590667-ea41-4e16-914b-25952509c04e from test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.574-0500 I COMMAND [conn110] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.700-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test1_fsmdb0.agg_out (6087aa56-aa41-4dee-855d-21ecba6e0c89) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 5052), t: 1 } and commit timestamp Timestamp(1574796683, 5052)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.682-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (91a2f9be-7116-4319-993b-b3a9372f04e8)'. Ident: 'index-294--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 5053)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.574-0500 I STORAGE [conn110] dropCollection: test1_fsmdb0.agg_out (91a2f9be-7116-4319-993b-b3a9372f04e8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 5053), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.700-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test1_fsmdb0.agg_out (6087aa56-aa41-4dee-855d-21ecba6e0c89).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.682-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (91a2f9be-7116-4319-993b-b3a9372f04e8)'. Ident: 'index-305--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 5053)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.574-0500 I STORAGE [conn110] Finishing collection drop for test1_fsmdb0.agg_out (91a2f9be-7116-4319-993b-b3a9372f04e8).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.700-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 91a2f9be-7116-4319-993b-b3a9372f04e8 from test1_fsmdb0.tmp.agg_out.ebc06ae7-0b49-49bb-ab86-c4a7f6e20fec to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.682-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-293--8000595249233899911, commit timestamp: Timestamp(1574796683, 5053)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.574-0500 I STORAGE [conn110] renameCollection: renaming collection b2590667-ea41-4e16-914b-25952509c04e from test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.700-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (6087aa56-aa41-4dee-855d-21ecba6e0c89)'. Ident: 'index-292--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 5052)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.683-0500 I STORAGE [ReplWriterWorker-14] createCollection: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 with provided UUID: 85844d5b-91ff-4af0-adbd-5ba55c1d1821 and options: { uuid: UUID("85844d5b-91ff-4af0-adbd-5ba55c1d1821"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.574-0500 I COMMAND [conn68] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8971512586017857891, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5190081400801488327, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796683368), clusterTime: Timestamp(1574796683, 2600) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 2665), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 205ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.700-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (6087aa56-aa41-4dee-855d-21ecba6e0c89)'. Ident: 'index-301--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 5052)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.700-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.574-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (91a2f9be-7116-4319-993b-b3a9372f04e8)'. Ident: 'index-285-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 5053)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.700-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-291--4104909142373009110, commit timestamp: Timestamp(1574796683, 5052)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.701-0500 I STORAGE [ReplWriterWorker-9] createCollection: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 with provided UUID: 64f300cf-3f15-4c6f-bb35-db62e661e114 and options: { uuid: UUID("64f300cf-3f15-4c6f-bb35-db62e661e114"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.574-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (91a2f9be-7116-4319-993b-b3a9372f04e8)'. Ident: 'index-292-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 5053)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.701-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 (b2590667-ea41-4e16-914b-25952509c04e) to test1_fsmdb0.agg_out and drop 91a2f9be-7116-4319-993b-b3a9372f04e8.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.716-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.574-0500 I STORAGE [conn110] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-282-8224331490264904478, commit timestamp: Timestamp(1574796683, 5053)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.701-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test1_fsmdb0.agg_out (91a2f9be-7116-4319-993b-b3a9372f04e8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 5053), t: 1 } and commit timestamp Timestamp(1574796683, 5053)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.717-0500 I STORAGE [ReplWriterWorker-6] createCollection: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 with provided UUID: 80205376-0cb8-4846-ba5b-6ccb94a7983f and options: { uuid: UUID("80205376-0cb8-4846-ba5b-6ccb94a7983f"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.574-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.701-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test1_fsmdb0.agg_out (91a2f9be-7116-4319-993b-b3a9372f04e8).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.731-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.574-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6736831538592308081, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5407954289127686536, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796683430), clusterTime: Timestamp(1574796683, 3029) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 3029), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 143ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.701-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection b2590667-ea41-4e16-914b-25952509c04e from test1_fsmdb0.tmp.agg_out.2a92f723-53fb-48a4-8ea0-cea1690aa7a8 to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.745-0500 I INDEX [ReplWriterWorker-0] index build: starting on test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.575-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.701-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (91a2f9be-7116-4319-993b-b3a9372f04e8)'. Ident: 'index-294--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 5053)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.745-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.576-0500 I STORAGE [conn110] createCollection: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 with generated UUID: 85844d5b-91ff-4af0-adbd-5ba55c1d1821 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.701-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (91a2f9be-7116-4319-993b-b3a9372f04e8)'. Ident: 'index-305--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 5053)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.745-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 2981a3da-2bc5-43d9-bb64-73823085798d: test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a (5e50e75c-c327-4f05-bb46-1ea87905b919 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.577-0500 I STORAGE [conn46] createCollection: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 with generated UUID: 64f300cf-3f15-4c6f-bb35-db62e661e114 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.701-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-293--4104909142373009110, commit timestamp: Timestamp(1574796683, 5053)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.745-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.577-0500 I STORAGE [conn108] createCollection: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 with generated UUID: 80205376-0cb8-4846-ba5b-6ccb94a7983f and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.702-0500 I STORAGE [ReplWriterWorker-10] createCollection: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 with provided UUID: 85844d5b-91ff-4af0-adbd-5ba55c1d1821 and options: { uuid: UUID("85844d5b-91ff-4af0-adbd-5ba55c1d1821"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.746-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.577-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.717-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.749-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.599-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: c5028b72-dcf2-498e-90fe-9c313f11ca1a: test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a ( 5e50e75c-c327-4f05-bb46-1ea87905b919 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.718-0500 I STORAGE [ReplWriterWorker-6] createCollection: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 with provided UUID: 64f300cf-3f15-4c6f-bb35-db62e661e114 and options: { uuid: UUID("64f300cf-3f15-4c6f-bb35-db62e661e114"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.751-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 2981a3da-2bc5-43d9-bb64-73823085798d: test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a ( 5e50e75c-c327-4f05-bb46-1ea87905b919 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.599-0500 I INDEX [conn114] Index build completed: c5028b72-dcf2-498e-90fe-9c313f11ca1a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.733-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.753-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf) to test1_fsmdb0.agg_out and drop b2590667-ea41-4e16-914b-25952509c04e.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.606-0500 I INDEX [conn110] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.733-0500 I STORAGE [ReplWriterWorker-8] createCollection: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 with provided UUID: 80205376-0cb8-4846-ba5b-6ccb94a7983f and options: { uuid: UUID("80205376-0cb8-4846-ba5b-6ccb94a7983f"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.753-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test1_fsmdb0.agg_out (b2590667-ea41-4e16-914b-25952509c04e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 5624), t: 1 } and commit timestamp Timestamp(1574796683, 5624)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.607-0500 I INDEX [conn110] Registering index build: 6c11d0c3-99a5-4c80-aed9-290bda708a29
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.747-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.753-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test1_fsmdb0.agg_out (b2590667-ea41-4e16-914b-25952509c04e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.614-0500 I INDEX [conn46] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.762-0500 I INDEX [ReplWriterWorker-14] index build: starting on test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.754-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 9565e9b4-7b18-4d2e-bcf3-2611f80a00bf from test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.621-0500 I INDEX [conn108] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.762-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.754-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (b2590667-ea41-4e16-914b-25952509c04e)'. Ident: 'index-298--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 5624)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.636-0500 I INDEX [conn110] index build: starting on test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.762-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: ae7d60ea-bc10-4a25-8c44-60c42837b52d: test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a (5e50e75c-c327-4f05-bb46-1ea87905b919 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.754-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (b2590667-ea41-4e16-914b-25952509c04e)'. Ident: 'index-303--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 5624)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.636-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.762-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.754-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-297--8000595249233899911, commit timestamp: Timestamp(1574796683, 5624)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.636-0500 I STORAGE [conn110] Index build initialized: 6c11d0c3-99a5-4c80-aed9-290bda708a29: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 (85844d5b-91ff-4af0-adbd-5ba55c1d1821 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.762-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.775-0500 I INDEX [ReplWriterWorker-5] index build: starting on test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.636-0500 I INDEX [conn110] Waiting for index build to complete: 6c11d0c3-99a5-4c80-aed9-290bda708a29
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.765-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.775-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.636-0500 I COMMAND [conn112] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.770-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ae7d60ea-bc10-4a25-8c44-60c42837b52d: test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a ( 5e50e75c-c327-4f05-bb46-1ea87905b919 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.775-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 9c5418f6-63e7-46e3-b145-031dd57acd69: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 (85844d5b-91ff-4af0-adbd-5ba55c1d1821 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.636-0500 I STORAGE [conn112] dropCollection: test1_fsmdb0.agg_out (b2590667-ea41-4e16-914b-25952509c04e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 5624), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.772-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf) to test1_fsmdb0.agg_out and drop b2590667-ea41-4e16-914b-25952509c04e.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.776-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.636-0500 I STORAGE [conn112] Finishing collection drop for test1_fsmdb0.agg_out (b2590667-ea41-4e16-914b-25952509c04e).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.772-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test1_fsmdb0.agg_out (b2590667-ea41-4e16-914b-25952509c04e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 5624), t: 1 } and commit timestamp Timestamp(1574796683, 5624)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.776-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.636-0500 I STORAGE [conn112] renameCollection: renaming collection 9565e9b4-7b18-4d2e-bcf3-2611f80a00bf from test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.772-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test1_fsmdb0.agg_out (b2590667-ea41-4e16-914b-25952509c04e).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.778-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.636-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (b2590667-ea41-4e16-914b-25952509c04e)'. Ident: 'index-291-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 5624)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.772-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 9565e9b4-7b18-4d2e-bcf3-2611f80a00bf from test1_fsmdb0.tmp.agg_out.3a70ab35-2053-41fc-8516-dd1710ec8a0d to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:23.779-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 9c5418f6-63e7-46e3-b145-031dd57acd69: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 ( 85844d5b-91ff-4af0-adbd-5ba55c1d1821 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.636-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (b2590667-ea41-4e16-914b-25952509c04e)'. Ident: 'index-296-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 5624)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.772-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (b2590667-ea41-4e16-914b-25952509c04e)'. Ident: 'index-298--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 5624)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:25.582-0500 I CONNPOOL [ReplCoordExternNetwork] Ending connection to host localhost:20002 due to bad connection status: CallbackCanceled: Callback was canceled; 1 connections to that host remain open
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.636-0500 I STORAGE [conn112] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-289-8224331490264904478, commit timestamp: Timestamp(1574796683, 5624)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.772-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (b2590667-ea41-4e16-914b-25952509c04e)'. Ident: 'index-303--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 5624)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.597-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a (5e50e75c-c327-4f05-bb46-1ea87905b919) to test1_fsmdb0.agg_out and drop 9565e9b4-7b18-4d2e-bcf3-2611f80a00bf.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.636-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.772-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-297--4104909142373009110, commit timestamp: Timestamp(1574796683, 5624)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.598-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test1_fsmdb0.agg_out (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 6064), t: 1 } and commit timestamp Timestamp(1574796683, 6064)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.636-0500 I INDEX [conn46] Registering index build: c2b76d0a-4ce8-4df1-af4b-0fb110b6f064
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.788-0500 I INDEX [ReplWriterWorker-10] index build: starting on test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.598-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test1_fsmdb0.agg_out (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.636-0500 I INDEX [conn108] Registering index build: ce45d4b5-7e06-4dbe-8cb7-d0cceb0fa476
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.788-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.598-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 5e50e75c-c327-4f05-bb46-1ea87905b919 from test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.637-0500 I COMMAND [conn65] command test1_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5992247571615698595, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 9005522713276521779, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796683457), clusterTime: Timestamp(1574796683, 3034) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 3034), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 178ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.788-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: aeee9027-6f5e-4db9-9268-17c76ae75b82: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 (85844d5b-91ff-4af0-adbd-5ba55c1d1821 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.598-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf)'. Ident: 'index-300--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 6064)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.637-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.788-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.598-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf)'. Ident: 'index-309--8000595249233899911', commit timestamp: 'Timestamp(1574796683, 6064)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.649-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.789-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.598-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-299--8000595249233899911, commit timestamp: Timestamp(1574796683, 6064)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.657-0500 I INDEX [conn46] index build: starting on test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.791-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.620-0500 I STORAGE [ReplWriterWorker-1] createCollection: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 with provided UUID: d851e299-edb2-4d3b-b482-b93995646ad7 and options: { uuid: UUID("d851e299-edb2-4d3b-b482-b93995646ad7"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.657-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:23.793-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: aeee9027-6f5e-4db9-9268-17c76ae75b82: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 ( 85844d5b-91ff-4af0-adbd-5ba55c1d1821 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.657-0500 I STORAGE [conn46] Index build initialized: c2b76d0a-4ce8-4df1-af4b-0fb110b6f064: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 (64f300cf-3f15-4c6f-bb35-db62e661e114 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:24.603-0500 I CONNPOOL [ReplCoordExternNetwork] Ending connection to host localhost:20001 due to bad connection status: CallbackCanceled: Callback was canceled; 1 connections to that host remain open
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.657-0500 I INDEX [conn46] Waiting for index build to complete: c2b76d0a-4ce8-4df1-af4b-0fb110b6f064
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:25.582-0500 I NETWORK [conn16] end connection 127.0.0.1:51174 (13 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.657-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 6c11d0c3-99a5-4c80-aed9-290bda708a29: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 ( 85844d5b-91ff-4af0-adbd-5ba55c1d1821 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.599-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a (5e50e75c-c327-4f05-bb46-1ea87905b919) to test1_fsmdb0.agg_out and drop 9565e9b4-7b18-4d2e-bcf3-2611f80a00bf.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.657-0500 I COMMAND [conn114] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.599-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test1_fsmdb0.agg_out (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 6064), t: 1 } and commit timestamp Timestamp(1574796683, 6064)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.657-0500 I INDEX [conn110] Index build completed: 6c11d0c3-99a5-4c80-aed9-290bda708a29
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.599-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test1_fsmdb0.agg_out (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.658-0500 I STORAGE [conn114] dropCollection: test1_fsmdb0.agg_out (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796683, 6064), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.599-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 5e50e75c-c327-4f05-bb46-1ea87905b919 from test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:23.658-0500 I STORAGE [conn114] Finishing collection drop for test1_fsmdb0.agg_out (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.599-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf)'. Ident: 'index-300--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 6064)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.599-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf)'. Ident: 'index-309--4104909142373009110', commit timestamp: 'Timestamp(1574796683, 6064)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.599-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-299--4104909142373009110, commit timestamp: Timestamp(1574796683, 6064)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:24.603-0500 I NETWORK [conn17] end connection 127.0.0.1:38130 (44 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.595-0500 I STORAGE [conn114] renameCollection: renaming collection 5e50e75c-c327-4f05-bb46-1ea87905b919 from test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a to test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.595-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf)'. Ident: 'index-295-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 6064)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.595-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (9565e9b4-7b18-4d2e-bcf3-2611f80a00bf)'. Ident: 'index-298-8224331490264904478', commit timestamp: 'Timestamp(1574796683, 6064)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.595-0500 I STORAGE [conn114] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-293-8224331490264904478, commit timestamp: Timestamp(1574796683, 6064)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.595-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.595-0500 I COMMAND [conn114] command test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a appName: "tid:4" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test1_fsmdb0.tmp.agg_out.e5e640b1-796c-45b5-ae52-14e43c6dae3a", to: "test1_fsmdb0.agg_out", collectionOptions: { validationLevel: "off", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 6060), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 12776 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2950ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.595-0500 I COMMAND [conn37] command test1_fsmdb0.$cmd command: listCollections { listCollections: 1, filter: { name: "agg_out" }, $clusterTime: { clusterTime: Timestamp(1574796683, 5692), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $configServerState: { opTime: { ts: Timestamp(1574796683, 5692), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:635 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2948094 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 2948ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.595-0500 I COMMAND [conn119] command test1_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796683, 3546), lsid: { id: UUID("c8c15e08-f1a6-4edc-831c-249e4d0ea0c0") }, $clusterTime: { clusterTime: Timestamp(1574796683, 3610), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796683, 3546). Collection minimum timestamp is Timestamp(1574796683, 6064)" errName:SnapshotUnavailable errCode:246 reslen:582 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2938520 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2938ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.595-0500 I COMMAND [conn67] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5160239636634019861, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1629852846895839261, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796683529), clusterTime: Timestamp(1574796683, 3546) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 3610), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3065ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.596-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.597-0500 I COMMAND [conn110] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: getMore { getMore: 4650921040477543042, collection: "fsmcoll0", lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 6064), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796683, 5692), t: 1 } }, $db: "test1_fsmdb0" } originatingCommand: { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } } ], fromMongos: true, needsMerge: true, collation: { locale: "simple" }, cursor: { batchSize: 0 }, runtimeConstants: { localNow: new Date(1574796683575), clusterTime: Timestamp(1574796683, 5053) }, use44SortKeys: true, allowImplicitCollectionCreation: false, shardVersion: [ Timestamp(1, 1), ObjectId('5ddd7d7d3bbfe7fa5630d6e7') ], lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 5053), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } planSummary: COLLSCAN cursorid:4650921040477543042 keysExamined:0 docsExamined:514 cursorExhausted:1 numYields:4 nreturned:265 reslen:271124 locks:{ ReplicationStateTransition: { acquireCount: { w: 5 } }, Global: { acquireCount: { r: 5 } }, Database: { acquireCount: { r: 5 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2937292 } }, Collection: { acquireCount: { r: 5 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 2938ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.600-0500 I SHARDING [conn37] CMD: shardcollection: { _shardsvrShardCollection: "test1_fsmdb0.agg_out", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796686, 1), signature: { hash: BinData(0, FB586CE71C8419B71BE34AD164841F2636C292F2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796686, 1), t: 1 } }, $db: "admin" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.600-0500 I SHARDING [conn37] about to log metadata event into changelog: { _id: "nz_desktop:20001-2019-11-26T14:31:26.600-0500-5ddd7d8e3bbfe7fa5630e249", server: "nz_desktop:20001", shard: "shard-rs0", clientAddr: "127.0.0.1:38444", time: new Date(1574796686600), what: "shardCollection.start", ns: "test1_fsmdb0.agg_out", details: { shardKey: { _id: "hashed" }, collection: "test1_fsmdb0.agg_out", uuid: UUID("5e50e75c-c327-4f05-bb46-1ea87905b919"), empty: false, fromMapReduce: false, primary: "shard-rs0:shard-rs0/localhost:20001,localhost:20002,localhost:20003", numChunks: 1 } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.600-0500 I STORAGE [conn114] createCollection: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 with generated UUID: d851e299-edb2-4d3b-b482-b93995646ad7 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.603-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.607-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.agg_out to version 1|0||5ddd7d8e3bbfe7fa5630e252 took 1 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.607-0500 I SHARDING [conn37] Marking collection test1_fsmdb0.agg_out as collection version: 1|0||5ddd7d8e3bbfe7fa5630e252, shard version: 1|0||5ddd7d8e3bbfe7fa5630e252
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.607-0500 I SHARDING [conn37] Created 1 chunk(s) for: test1_fsmdb0.agg_out, producing collection version 1|0||5ddd7d8e3bbfe7fa5630e252
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.607-0500 I SHARDING [conn37] about to log metadata event into changelog: { _id: "nz_desktop:20001-2019-11-26T14:31:26.607-0500-5ddd7d8e3bbfe7fa5630e256", server: "nz_desktop:20001", shard: "shard-rs0", clientAddr: "127.0.0.1:38444", time: new Date(1574796686607), what: "shardCollection.end", ns: "test1_fsmdb0.agg_out", details: { version: "1|0||5ddd7d8e3bbfe7fa5630e252" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.607-0500 I STORAGE [ShardServerCatalogCacheLoader-0] createCollection: config.cache.chunks.test1_fsmdb0.agg_out with provided UUID: ad34fc50-677f-4846-b03c-7b24f5f1669a and options: { uuid: UUID("ad34fc50-677f-4846-b03c-7b24f5f1669a") }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.617-0500 I INDEX [conn114] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.617-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: c2b76d0a-4ce8-4df1-af4b-0fb110b6f064: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 ( 64f300cf-3f15-4c6f-bb35-db62e661e114 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.632-0500 I INDEX [conn108] index build: starting on test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.632-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.632-0500 I STORAGE [conn108] Index build initialized: ce45d4b5-7e06-4dbe-8cb7-d0cceb0fa476: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 (80205376-0cb8-4846-ba5b-6ccb94a7983f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.632-0500 I INDEX [conn108] Waiting for index build to complete: ce45d4b5-7e06-4dbe-8cb7-d0cceb0fa476
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.632-0500 I INDEX [conn46] Index build completed: c2b76d0a-4ce8-4df1-af4b-0fb110b6f064
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.632-0500 I COMMAND [conn110] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.632-0500 I COMMAND [conn46] command test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 5622), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 21662 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 3017ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.632-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.632-0500 I INDEX [conn114] Registering index build: 9c68c581-5edb-41df-af60-97f693003303
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.632-0500 I COMMAND [conn46] CMD: drop test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.633-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.634-0500 I STORAGE [ReplWriterWorker-4] createCollection: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 with provided UUID: d851e299-edb2-4d3b-b482-b93995646ad7 and options: { uuid: UUID("d851e299-edb2-4d3b-b482-b93995646ad7"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.636-0500 I INDEX [ShardServerCatalogCacheLoader-0] index build: done building index _id_ on ns config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.636-0500 I INDEX [ShardServerCatalogCacheLoader-0] Registering index build: 2f432a4c-445b-461e-a7e2-8956fce689c6
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.636-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.645-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.649-0500 I INDEX [ReplWriterWorker-12] index build: starting on test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.649-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.649-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 8e9df5f9-c32f-4d60-9022-08c8e9f849fe: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 (64f300cf-3f15-4c6f-bb35-db62e661e114 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.650-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.650-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.650-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.653-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.654-0500 I STORAGE [ReplWriterWorker-12] createCollection: config.cache.chunks.test1_fsmdb0.agg_out with provided UUID: ad34fc50-677f-4846-b03c-7b24f5f1669a and options: { uuid: UUID("ad34fc50-677f-4846-b03c-7b24f5f1669a") }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.655-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 8e9df5f9-c32f-4d60-9022-08c8e9f849fe: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 ( 64f300cf-3f15-4c6f-bb35-db62e661e114 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.657-0500 I INDEX [conn114] index build: starting on test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.657-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.657-0500 I STORAGE [conn114] Index build initialized: 9c68c581-5edb-41df-af60-97f693003303: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 (d851e299-edb2-4d3b-b482-b93995646ad7 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.657-0500 I INDEX [conn114] Waiting for index build to complete: 9c68c581-5edb-41df-af60-97f693003303
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.657-0500 I STORAGE [conn46] dropCollection: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 (85844d5b-91ff-4af0-adbd-5ba55c1d1821) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.657-0500 I STORAGE [conn46] Finishing collection drop for test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 (85844d5b-91ff-4af0-adbd-5ba55c1d1821).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.657-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 from version {} to version { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.657-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 (85844d5b-91ff-4af0-adbd-5ba55c1d1821)'. Ident: 'index-307-8224331490264904478', commit timestamp: 'Timestamp(1574796686, 1011)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.657-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 (85844d5b-91ff-4af0-adbd-5ba55c1d1821)'. Ident: 'index-310-8224331490264904478', commit timestamp: 'Timestamp(1574796686, 1011)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.657-0500 I STORAGE [conn46] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201'. Ident: collection-304-8224331490264904478, commit timestamp: Timestamp(1574796686, 1011)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.657-0500 I COMMAND [conn110] renameCollectionForCommand: rename test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 to test1_fsmdb0.agg_out and drop test1_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.657-0500 I COMMAND [conn71] command test1_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4650921040477543042, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3752180479030823295, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796683575), clusterTime: Timestamp(1574796683, 5053) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 5053), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58004", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:796 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3081ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:26.658-0500 I COMMAND [conn32] command test1_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("24e7029f-69f1-4b83-9971-584e1ea130ee") }, $clusterTime: { clusterTime: Timestamp(1574796683, 5051), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:626 protocol:op_msg 3083ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.658-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.agg_out to version 1|0||5ddd7d8e3bbfe7fa5630e252 took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.659-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d8b5cde74b6784bb461' unlocked.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.660-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d8b5cde74b6784bb45f' unlocked.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.661-0500 I COMMAND [conn19] command admin.$cmd appName: "tid:2" command: _configsvrShardCollection { _configsvrShardCollection: "test1_fsmdb0.agg_out", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1574796683, 5690), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44870", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796683, 5690), t: 1 } }, $db: "admin" } numYields:0 reslen:586 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 6 } }, Global: { acquireCount: { r: 2, w: 4 } }, Database: { acquireCount: { r: 2, w: 4 } }, Collection: { acquireCount: { r: 2, w: 4 } }, Mutex: { acquireCount: { r: 10, W: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 3017ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:26.661-0500 I COMMAND [conn74] command test1_fsmdb0.agg_out appName: "tid:2" command: shardCollection { shardCollection: "test1_fsmdb0.agg_out", key: { _id: "hashed" }, lsid: { id: UUID("275b014e-73f2-4e18-a6f0-826d2de7f856") }, $clusterTime: { clusterTime: Timestamp(1574796683, 5690), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:244 protocol:op_msg 3017ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.662-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: ce45d4b5-7e06-4dbe-8cb7-d0cceb0fa476: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 ( 80205376-0cb8-4846-ba5b-6ccb94a7983f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:26.663-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.agg_out to version 1|0||5ddd7d8e3bbfe7fa5630e252 took 1 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.666-0500 I INDEX [ReplWriterWorker-13] index build: starting on test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.666-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.666-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 848a1db0-13e9-437c-8e09-4d01b7893194: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 (64f300cf-3f15-4c6f-bb35-db62e661e114 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.666-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.667-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.669-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.672-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.673-0500 I STORAGE [ReplWriterWorker-13] createCollection: config.cache.chunks.test1_fsmdb0.agg_out with provided UUID: ad34fc50-677f-4846-b03c-7b24f5f1669a and options: { uuid: UUID("ad34fc50-677f-4846-b03c-7b24f5f1669a") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.674-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 848a1db0-13e9-437c-8e09-4d01b7893194: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 ( 64f300cf-3f15-4c6f-bb35-db62e661e114 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.677-0500 I INDEX [ShardServerCatalogCacheLoader-0] index build: starting on config.cache.chunks.test1_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.677-0500 I INDEX [ShardServerCatalogCacheLoader-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.677-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Index build initialized: 2f432a4c-445b-461e-a7e2-8956fce689c6: config.cache.chunks.test1_fsmdb0.agg_out (ad34fc50-677f-4846-b03c-7b24f5f1669a ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.677-0500 I INDEX [ShardServerCatalogCacheLoader-0] Waiting for index build to complete: 2f432a4c-445b-461e-a7e2-8956fce689c6
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.677-0500 I INDEX [conn108] Index build completed: ce45d4b5-7e06-4dbe-8cb7-d0cceb0fa476
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.677-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.677-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.677-0500 I COMMAND [conn108] command test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 5622), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 2952167 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 3054ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.677-0500 I COMMAND [conn110] CMD: drop test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.678-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.678-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.681-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.684-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.684-0500 I STORAGE [conn110] dropCollection: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 (64f300cf-3f15-4c6f-bb35-db62e661e114) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.684-0500 I STORAGE [conn110] Finishing collection drop for test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 (64f300cf-3f15-4c6f-bb35-db62e661e114).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.685-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 (64f300cf-3f15-4c6f-bb35-db62e661e114)'. Ident: 'index-308-8224331490264904478', commit timestamp: 'Timestamp(1574796686, 1018)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.685-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 (64f300cf-3f15-4c6f-bb35-db62e661e114)'. Ident: 'index-312-8224331490264904478', commit timestamp: 'Timestamp(1574796686, 1018)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.685-0500 I STORAGE [conn110] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163'. Ident: collection-305-8224331490264904478, commit timestamp: Timestamp(1574796686, 1018)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.685-0500 I COMMAND [conn68] command test1_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4917421259022220104, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1439353762117942444, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796683575), clusterTime: Timestamp(1574796683, 5053) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 5053), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44858", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:796 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3108ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.685-0500 I STORAGE [conn110] createCollection: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b with generated UUID: ed6dc7fc-a823-4bcd-8191-43541b5f5f67 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.685-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 2f432a4c-445b-461e-a7e2-8956fce689c6: config.cache.chunks.test1_fsmdb0.agg_out ( ad34fc50-677f-4846-b03c-7b24f5f1669a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.685-0500 I INDEX [ShardServerCatalogCacheLoader-0] Index build completed: 2f432a4c-445b-461e-a7e2-8956fce689c6
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:26.685-0500 I COMMAND [conn70] command test1_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("402c61a1-9f8f-4ea9-870e-a14d3f03e01d") }, $clusterTime: { clusterTime: Timestamp(1574796683, 5053), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:626 protocol:op_msg 3109ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.685-0500 I SHARDING [ShardServerCatalogCacheLoader-0] Marking collection config.cache.chunks.test1_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.688-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 9c68c581-5edb-41df-af60-97f693003303: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 ( d851e299-edb2-4d3b-b482-b93995646ad7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.688-0500 I INDEX [conn114] Index build completed: 9c68c581-5edb-41df-af60-97f693003303
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.688-0500 I COMMAND [conn68] CMD: dropIndexes test1_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.688-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.689-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.689-0500 I COMMAND [conn68] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.689-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.693-0500 I INDEX [ReplWriterWorker-12] index build: starting on test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.693-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.693-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: a47228a5-f326-4dbb-a892-651e2dfefcc0: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 (80205376-0cb8-4846-ba5b-6ccb94a7983f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.693-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.693-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.694-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d8e5cde74b6784bb48c
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.695-0500 I SHARDING [conn19] Enabling sharding for database [test1_fsmdb0] in config db
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.696-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d8e5cde74b6784bb48c' unlocked.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.696-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.698-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d8e5cde74b6784bb492
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.699-0500 I COMMAND [ReplWriterWorker-2] CMD: drop test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.699-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 (85844d5b-91ff-4af0-adbd-5ba55c1d1821) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796686, 1011), t: 1 } and commit timestamp Timestamp(1574796686, 1011)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.699-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 (85844d5b-91ff-4af0-adbd-5ba55c1d1821).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.699-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 (85844d5b-91ff-4af0-adbd-5ba55c1d1821)'. Ident: 'index-312--8000595249233899911', commit timestamp: 'Timestamp(1574796686, 1011)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.699-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 (85844d5b-91ff-4af0-adbd-5ba55c1d1821)'. Ident: 'index-319--8000595249233899911', commit timestamp: 'Timestamp(1574796686, 1011)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.699-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201'. Ident: collection-311--8000595249233899911, commit timestamp: Timestamp(1574796686, 1011)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.700-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d8e5cde74b6784bb494
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.700-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: a47228a5-f326-4dbb-a892-651e2dfefcc0: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 ( 80205376-0cb8-4846-ba5b-6ccb94a7983f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.704-0500 I INDEX [conn110] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.705-0500 I INDEX [conn110] Registering index build: efa43795-47c0-4a92-a665-c05c7c0b9e3a
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.706-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 from version {} to version { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:26.706-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 from version {} to version { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.706-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.agg_out to version 1|0||5ddd7d8e3bbfe7fa5630e252 took 0 ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:26.707-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.agg_out to version 1|0||5ddd7d8e3bbfe7fa5630e252 took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.708-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d8e5cde74b6784bb494' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.708-0500 I COMMAND [conn68] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.708-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.709-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d8e5cde74b6784bb492' unlocked.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.713-0500 I INDEX [ReplWriterWorker-10] index build: starting on test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.713-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.713-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: ebb36417-1332-4bd1-bf6e-eb66235ab3ce: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 (80205376-0cb8-4846-ba5b-6ccb94a7983f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.713-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.714-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.716-0500 I INDEX [ReplWriterWorker-8] index build: starting on config.cache.chunks.test1_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.716-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.716-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: abf5536d-4ea6-46d6-a775-ff67f157b63e: config.cache.chunks.test1_fsmdb0.agg_out (ad34fc50-677f-4846-b03c-7b24f5f1669a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.716-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.717-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.717-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.718-0500 I COMMAND [ReplWriterWorker-14] CMD: drop test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.718-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 (85844d5b-91ff-4af0-adbd-5ba55c1d1821) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796686, 1011), t: 1 } and commit timestamp Timestamp(1574796686, 1011)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.718-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 (85844d5b-91ff-4af0-adbd-5ba55c1d1821).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.718-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 (85844d5b-91ff-4af0-adbd-5ba55c1d1821)'. Ident: 'index-312--4104909142373009110', commit timestamp: 'Timestamp(1574796686, 1011)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.718-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201 (85844d5b-91ff-4af0-adbd-5ba55c1d1821)'. Ident: 'index-319--4104909142373009110', commit timestamp: 'Timestamp(1574796686, 1011)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.718-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201'. Ident: collection-311--4104909142373009110, commit timestamp: Timestamp(1574796686, 1011)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.719-0500 I INDEX [conn110] index build: starting on test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.719-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.719-0500 I STORAGE [conn110] Index build initialized: efa43795-47c0-4a92-a665-c05c7c0b9e3a: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b (ed6dc7fc-a823-4bcd-8191-43541b5f5f67 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.719-0500 I INDEX [conn110] Waiting for index build to complete: efa43795-47c0-4a92-a665-c05c7c0b9e3a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.719-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: ebb36417-1332-4bd1-bf6e-eb66235ab3ce: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 ( 80205376-0cb8-4846-ba5b-6ccb94a7983f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.719-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.719-0500 I COMMAND [conn108] CMD: drop test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.720-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.720-0500 I STORAGE [conn108] dropCollection: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 (80205376-0cb8-4846-ba5b-6ccb94a7983f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.721-0500 I STORAGE [conn108] Finishing collection drop for test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 (80205376-0cb8-4846-ba5b-6ccb94a7983f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.721-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 (80205376-0cb8-4846-ba5b-6ccb94a7983f)'. Ident: 'index-309-8224331490264904478', commit timestamp: 'Timestamp(1574796686, 1720)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.721-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 (80205376-0cb8-4846-ba5b-6ccb94a7983f)'. Ident: 'index-314-8224331490264904478', commit timestamp: 'Timestamp(1574796686, 1720)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:26.876-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js finished.
[fsm_workload_test:agg_out] 2019-11-26T14:31:27.315-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:29.908-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:29.908-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:29.908-0500 [jsTest] New session started with sessionID: { "id" : UUID("75d521b6-964f-416d-8d20-67ee1a45dbf3") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:29.908-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:29.908-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:29.908-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:29.908-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:29.908-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:29.909-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:29.909-0500 [jsTest] Workload(s) completed in 17265 ms: jstests/concurrency/fsm_workloads/agg_out.js
[fsm_workload_test:agg_out] 2019-11-26T14:31:29.909-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:29.909-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:29.909-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.721-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:29.910-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash_background.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash_background"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash_background.js
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:26.721-0500 I COMMAND [conn33] command test1_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063") }, $clusterTime: { clusterTime: Timestamp(1574796683, 5053), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:812 protocol:op_msg 3145ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:29.910-0500 FSM workload jstests/concurrency/fsm_workloads/agg_out.js finished.
[executor:fsm_workload_test:job0] 2019-11-26T14:31:29.911-0500 agg_out.js ran in 22.24 seconds: no failures detected.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.729-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: abf5536d-4ea6-46d6-a775-ff67f157b63e: config.cache.chunks.test1_fsmdb0.agg_out ( ad34fc50-677f-4846-b03c-7b24f5f1669a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:26.730-0500 I COMMAND [conn71] command test1_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563") }, $clusterTime: { clusterTime: Timestamp(1574796685, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test1_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:812 protocol:op_msg 132ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.737-0500 I INDEX [ReplWriterWorker-2] index build: starting on config.cache.chunks.test1_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:26.749-0500 I STORAGE [ReplWriterWorker-9] createCollection: config.cache.chunks.test1_fsmdb0.agg_out with provided UUID: d42e625c-196f-4a50-b0c5-66d06bbde62c and options: { uuid: UUID("d42e625c-196f-4a50-b0c5-66d06bbde62c") }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:26.749-0500 I STORAGE [ReplWriterWorker-11] createCollection: config.cache.chunks.test1_fsmdb0.agg_out with provided UUID: d42e625c-196f-4a50-b0c5-66d06bbde62c and options: { uuid: UUID("d42e625c-196f-4a50-b0c5-66d06bbde62c") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.751-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d8e5cde74b6784bb4af
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.721-0500 I STORAGE [conn108] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23'. Ident: collection-306-8224331490264904478, commit timestamp: Timestamp(1574796686, 1720)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.727-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46542 #126 (46 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:26.725-0500 I NETWORK [conn33] Successfully connected to shard-rs0/localhost:20001,localhost:20002,localhost:20003 (1 connections now open to shard-rs0/localhost:20001,localhost:20002,localhost:20003 with a 0 second timeout)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.738-0500 I INDEX [ReplWriterWorker-1] index build: starting on test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0] Pausing the background check repl dbhash thread.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.737-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:26.755-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 from version {} to version { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:26.766-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:26.765-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.721-0500 I COMMAND [conn70] command test1_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4613786166880196915, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5079598007179657033, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796683576), clusterTime: Timestamp(1574796683, 5053) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("e3e6d81f-56e4-4925-8103-eee7edf55063"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796683, 5053), signature: { hash: BinData(0, 333929433A71A281AECCBD46BFDC3880F7953EEE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58010", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796677, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:982 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3144ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.752-0500 I SHARDING [conn19] Enabling sharding for database [test1_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.727-0500 I NETWORK [conn126] received client metadata from 127.0.0.1:46542 conn126: { driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:26.727-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.agg_out to version 1|0||5ddd7d8e3bbfe7fa5630e252 took 0 ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.738-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.737-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 5d46685f-f715-4c44-83bb-ee9e0fd6b41b: config.cache.chunks.test1_fsmdb0.agg_out (ad34fc50-677f-4846-b03c-7b24f5f1669a ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:26.756-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.agg_out to version 1|0||5ddd7d8e3bbfe7fa5630e252 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:26.786-0500 I INDEX [ReplWriterWorker-15] index build: starting on config.cache.chunks.test1_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:26.785-0500 I INDEX [ReplWriterWorker-8] index build: starting on config.cache.chunks.test1_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.721-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.754-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d8e5cde74b6784bb4af' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.730-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.agg_out to version 1|0||5ddd7d8e3bbfe7fa5630e252 took 1 ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:26.727-0500 I NETWORK [conn33] Successfully connected to shard-rs1/localhost:20004,localhost:20005,localhost:20006 (1 connections now open to shard-rs1/localhost:20004,localhost:20005,localhost:20006 with a 0 second timeout)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.738-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: d6d99ae2-1107-431a-8a4d-76ea56a88816: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 (d851e299-edb2-4d3b-b482-b93995646ad7 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.737-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:26.781-0500 I NETWORK [conn74] end connection 127.0.0.1:44870 (7 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:26.786-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:26.785-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.721-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.756-0500 I SHARDING [conn17] distributed lock 'test1_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d8e5cde74b6784bb4b7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.730-0500 I SHARDING [conn126] Marking collection test1_fsmdb0.agg_out as collection version: 1|0||5ddd7d8e3bbfe7fa5630e252, shard version: 0|0||5ddd7d8e3bbfe7fa5630e252
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:26.794-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 from version {} to version { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.738-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.738-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:26.787-0500 I NETWORK [conn70] end connection 127.0.0.1:44858 (6 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:26.786-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 20727dd5-329a-40f6-b463-8fc13a24b05e: config.cache.chunks.test1_fsmdb0.agg_out (d42e625c-196f-4a50-b0c5-66d06bbde62c ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:26.785-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: b73c91ef-6eb4-403e-a1d6-079a47d522fe: config.cache.chunks.test1_fsmdb0.agg_out (d42e625c-196f-4a50-b0c5-66d06bbde62c ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.723-0500 I COMMAND [conn70] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.758-0500 I SHARDING [conn17] distributed lock 'test1_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d8e5cde74b6784bb4bb
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.730-0500 I STORAGE [ShardServerCatalogCacheLoader-1] createCollection: config.cache.chunks.test1_fsmdb0.agg_out with provided UUID: d42e625c-196f-4a50-b0c5-66d06bbde62c and options: { uuid: UUID("d42e625c-196f-4a50-b0c5-66d06bbde62c") }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:26.795-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.agg_out to version 1|0||5ddd7d8e3bbfe7fa5630e252 took 0 ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.739-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.741-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:26.863-0500 I NETWORK [conn76] end connection 127.0.0.1:44908 (5 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:26.787-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:26.785-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.725-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.759-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 from version {} to version { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.731-0500 I COMMAND [conn80] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:26.810-0500 I NETWORK [conn33] end connection 127.0.0.1:58010 (2 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.741-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.752-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 5d46685f-f715-4c44-83bb-ee9e0fd6b41b: config.cache.chunks.test1_fsmdb0.agg_out ( ad34fc50-677f-4846-b03c-7b24f5f1669a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:26.868-0500 I NETWORK [conn75] end connection 127.0.0.1:44886 (4 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:26.787-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:29.922-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js started with pid 15457.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:26.786-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.725-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39074 #122 (45 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.760-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.agg_out to version 1|0||5ddd7d8e3bbfe7fa5630e252 took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.731-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:26.828-0500 I NETWORK [conn32] end connection 127.0.0.1:58004 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.742-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.760-0500 I INDEX [ReplWriterWorker-9] index build: starting on test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:27.265-0500 I COMMAND [conn71] command test1_fsmdb0 appName: "tid:4" command: enableSharding { enableSharding: "test1_fsmdb0", lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563") }, $clusterTime: { clusterTime: Timestamp(1574796686, 2548), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:163 protocol:op_msg 506ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:26.788-0500 I SHARDING [ReplWriterWorker-7] Marking collection config.cache.chunks.test1_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:26.787-0500 I SHARDING [ReplWriterWorker-6] Marking collection config.cache.chunks.test1_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.725-0500 I NETWORK [conn122] received client metadata from 127.0.0.1:39074 conn122: { driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.761-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7d8e5cde74b6784bb4bb' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.733-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:27.321-0500 I NETWORK [conn31] end connection 127.0.0.1:57938 (0 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.742-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 (64f300cf-3f15-4c6f-bb35-db62e661e114) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796686, 1018), t: 1 } and commit timestamp Timestamp(1574796686, 1018)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.760-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:27.274-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 from version {} to version { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:26.790-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 1 side writes (inserted: 1, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:26.790-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: drain applied 1 side writes (inserted: 1, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.729-0500 I COMMAND [conn114] CMD: drop test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.762-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7d8e5cde74b6784bb4b7' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.736-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.742-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 (64f300cf-3f15-4c6f-bb35-db62e661e114).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.760-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: fca7971a-24b2-424c-9502-7b19b72cc962: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 (d851e299-edb2-4d3b-b482-b93995646ad7 ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:27.276-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.agg_out to version 1|0||5ddd7d8e3bbfe7fa5630e252 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:26.790-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:26.790-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.729-0500 I STORAGE [conn114] dropCollection: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 (d851e299-edb2-4d3b-b482-b93995646ad7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.791-0500 I SHARDING [conn22] distributed lock 'test1_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d8e5cde74b6784bb4ce
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.738-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.742-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 (64f300cf-3f15-4c6f-bb35-db62e661e114)'. Ident: 'index-314--8000595249233899911', commit timestamp: 'Timestamp(1574796686, 1018)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.760-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:27.313-0500 I NETWORK [conn71] end connection 127.0.0.1:44860 (3 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:26.793-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 20727dd5-329a-40f6-b463-8fc13a24b05e: config.cache.chunks.test1_fsmdb0.agg_out ( d42e625c-196f-4a50-b0c5-66d06bbde62c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:26.791-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: b73c91ef-6eb4-403e-a1d6-079a47d522fe: config.cache.chunks.test1_fsmdb0.agg_out ( d42e625c-196f-4a50-b0c5-66d06bbde62c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.729-0500 I STORAGE [conn114] Finishing collection drop for test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 (d851e299-edb2-4d3b-b482-b93995646ad7).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.791-0500 I SHARDING [conn22] Enabling sharding for database [test1_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.742-0500 I COMMAND [conn80] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.742-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 (64f300cf-3f15-4c6f-bb35-db62e661e114)'. Ident: 'index-323--8000595249233899911', commit timestamp: 'Timestamp(1574796686, 1018)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.761-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:27.320-0500 I NETWORK [conn61] end connection 127.0.0.1:44742 (2 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:26.876-0500 I NETWORK [conn48] end connection 127.0.0.1:35070 (10 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:26.876-0500 I NETWORK [conn51] end connection 127.0.0.1:51706 (10 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.729-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 (d851e299-edb2-4d3b-b482-b93995646ad7)'. Ident: 'index-318-8224331490264904478', commit timestamp: 'Timestamp(1574796686, 2032)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.793-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7d8e5cde74b6784bb4ce' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.746-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.742-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163'. Ident: collection-313--8000595249233899911, commit timestamp: Timestamp(1574796686, 1018)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.765-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:27.321-0500 I NETWORK [conn63] end connection 127.0.0.1:44786 (1 connection now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:27.322-0500 I NETWORK [conn46] end connection 127.0.0.1:34984 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:27.322-0500 I NETWORK [conn49] end connection 127.0.0.1:51622 (9 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.729-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 (d851e299-edb2-4d3b-b482-b93995646ad7)'. Ident: 'index-320-8224331490264904478', commit timestamp: 'Timestamp(1574796686, 2032)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.795-0500 I SHARDING [conn23] distributed lock 'test1_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d8e5cde74b6784bb4d6
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.747-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: done building index _id_ on ns config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.743-0500 I STORAGE [ReplWriterWorker-2] createCollection: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b with provided UUID: ed6dc7fc-a823-4bcd-8191-43541b5f5f67 and options: { uuid: UUID("ed6dc7fc-a823-4bcd-8191-43541b5f5f67"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.765-0500 I COMMAND [ReplWriterWorker-5] CMD: drop test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:27.323-0500 I NETWORK [conn64] end connection 127.0.0.1:44794 (0 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:27.333-0500 I NETWORK [conn45] end connection 127.0.0.1:34948 (8 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:27.333-0500 I NETWORK [conn48] end connection 127.0.0.1:51584 (8 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.729-0500 I STORAGE [conn114] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368'. Ident: collection-315-8224331490264904478, commit timestamp: Timestamp(1574796686, 2032)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.796-0500 I SHARDING [conn23] distributed lock 'test1_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d8e5cde74b6784bb4da
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.747-0500 I INDEX [ShardServerCatalogCacheLoader-1] Registering index build: ba11c651-394d-4ae0-be0f-a403d573c1ee
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.743-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: d6d99ae2-1107-431a-8a4d-76ea56a88816: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 ( d851e299-edb2-4d3b-b482-b93995646ad7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.766-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 (64f300cf-3f15-4c6f-bb35-db62e661e114) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796686, 1018), t: 1 } and commit timestamp Timestamp(1574796686, 1018)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:27.395-0500 I NETWORK [shard-registry-reload] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:27.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.729-0500 I COMMAND [conn67] command test1_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 345550836717682247, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6700193840388445667, ns: "test1_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test1_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796686599), clusterTime: Timestamp(1574796686, 1) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796686, 65), signature: { hash: BinData(0, FB586CE71C8419B71BE34AD164841F2636C292F2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796686, 1), t: 1 } }, $db: "test1_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368\", to: \"test1_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test1_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:982 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 129ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.798-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 from version {} to version { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.748-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.759-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.766-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 (64f300cf-3f15-4c6f-bb35-db62e661e114).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:27.395-0500 I NETWORK [shard-registry-reload] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:27.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.731-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: efa43795-47c0-4a92-a665-c05c7c0b9e3a: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b ( ed6dc7fc-a823-4bcd-8191-43541b5f5f67 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.798-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.agg_out to version 1|0||5ddd7d8e3bbfe7fa5630e252 took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.752-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.761-0500 I SHARDING [ReplWriterWorker-1] Marking collection config.cache.chunks.test1_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.766-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: fca7971a-24b2-424c-9502-7b19b72cc962: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 ( d851e299-edb2-4d3b-b482-b93995646ad7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:27.397-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:27.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.731-0500 I INDEX [conn110] Index build completed: efa43795-47c0-4a92-a665-c05c7c0b9e3a
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.800-0500 I SHARDING [conn23] distributed lock with ts: '5ddd7d8e5cde74b6784bb4da' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.753-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.772-0500 I COMMAND [ReplWriterWorker-3] CMD: drop test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.766-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 (64f300cf-3f15-4c6f-bb35-db62e661e114)'. Ident: 'index-314--4104909142373009110', commit timestamp: 'Timestamp(1574796686, 1018)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:29.299-0500 I CONNPOOL [ReplCoordExternNetwork] Ending connection to host localhost:20004 due to bad connection status: CallbackCanceled: Callback was canceled; 2 connections to that host remain open
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:27.393-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.731-0500 I COMMAND [conn70] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.801-0500 I SHARDING [conn23] distributed lock with ts: '5ddd7d8e5cde74b6784bb4d6' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.753-0500 I COMMAND [conn80] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.773-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 (80205376-0cb8-4846-ba5b-6ccb94a7983f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796686, 1720), t: 1 } and commit timestamp Timestamp(1574796686, 1720)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.766-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163 (64f300cf-3f15-4c6f-bb35-db62e661e114)'. Ident: 'index-323--4104909142373009110', commit timestamp: 'Timestamp(1574796686, 1018)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:27.393-0500 I SHARDING [updateShardIdentityConfigString] Updating config server with confirmed set shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.731-0500 I COMMAND [conn67] CMD: dropIndexes test1_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.868-0500 I NETWORK [conn94] end connection 127.0.0.1:56292 (38 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.759-0500 I COMMAND [conn80] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.773-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 (80205376-0cb8-4846-ba5b-6ccb94a7983f).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.766-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163'. Ident: collection-313--4104909142373009110, commit timestamp: Timestamp(1574796686, 1018)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:29.295-0500 I CONNPOOL [ReplCoordExternNetwork] Ending connection to host localhost:20004 due to bad connection status: CallbackCanceled: Callback was canceled; 2 connections to that host remain open
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.733-0500 I COMMAND [conn67] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:26.876-0500 I NETWORK [conn93] end connection 127.0.0.1:56290 (37 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.764-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.773-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 (80205376-0cb8-4846-ba5b-6ccb94a7983f)'. Ident: 'index-316--8000595249233899911', commit timestamp: 'Timestamp(1574796686, 1720)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.767-0500 I STORAGE [ReplWriterWorker-14] createCollection: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b with provided UUID: ed6dc7fc-a823-4bcd-8191-43541b5f5f67 and options: { uuid: UUID("ed6dc7fc-a823-4bcd-8191-43541b5f5f67"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.738-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:27.262-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d8e5cde74b6784bb4bf
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.764-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: starting on config.cache.chunks.test1_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.773-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 (80205376-0cb8-4846-ba5b-6ccb94a7983f)'. Ident: 'index-327--8000595249233899911', commit timestamp: 'Timestamp(1574796686, 1720)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.784-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.738-0500 I COMMAND [conn68] CMD: dropIndexes test1_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:27.262-0500 I SHARDING [conn19] Enabling sharding for database [test1_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.764-0500 I INDEX [ShardServerCatalogCacheLoader-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.773-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23'. Ident: collection-315--8000595249233899911, commit timestamp: Timestamp(1574796686, 1720)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.786-0500 I SHARDING [ReplWriterWorker-15] Marking collection config.cache.chunks.test1_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.742-0500 I COMMAND [conn70] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:27.264-0500 I SHARDING [conn19] distributed lock with ts: '5ddd7d8e5cde74b6784bb4bf' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.764-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Index build initialized: ba11c651-394d-4ae0-be0f-a403d573c1ee: config.cache.chunks.test1_fsmdb0.agg_out (d42e625c-196f-4a50-b0c5-66d06bbde62c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.791-0500 I INDEX [ReplWriterWorker-7] index build: starting on test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.798-0500 I COMMAND [ReplWriterWorker-0] CMD: drop test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.746-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:27.264-0500 I COMMAND [conn19] command admin.$cmd appName: "tid:4" command: _configsvrEnableSharding { _configsvrEnableSharding: "test1_fsmdb0", writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("476da9c6-a903-4290-8632-5349ffeb7563"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1574796686, 2548), signature: { hash: BinData(0, FB586CE71C8419B71BE34AD164841F2636C292F2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:44860", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796686, 2546), t: 1 } }, $db: "admin" } numYields:0 reslen:505 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 5 } }, ReplicationStateTransition: { acquireCount: { w: 9 } }, Global: { acquireCount: { r: 5, w: 4 } }, Database: { acquireCount: { r: 4, w: 4 } }, Collection: { acquireCount: { r: 3, w: 4 } }, Mutex: { acquireCount: { r: 10 } }, oplog: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 505ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.764-0500 I INDEX [ShardServerCatalogCacheLoader-1] Waiting for index build to complete: ba11c651-394d-4ae0-be0f-a403d573c1ee
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.791-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.798-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 (80205376-0cb8-4846-ba5b-6ccb94a7983f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796686, 1720), t: 1 } and commit timestamp Timestamp(1574796686, 1720)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.748-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:27.267-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d8f5cde74b6784bb4f5
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.764-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.791-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 20134952-7e50-4552-915f-c3bc9b88d5df: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b (ed6dc7fc-a823-4bcd-8191-43541b5f5f67 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.798-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 (80205376-0cb8-4846-ba5b-6ccb94a7983f).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.798-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 (80205376-0cb8-4846-ba5b-6ccb94a7983f)'. Ident: 'index-316--4104909142373009110', commit timestamp: 'Timestamp(1574796686, 1720)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:27.268-0500 I SHARDING [conn19] distributed lock 'test1_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d8f5cde74b6784bb4f7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.765-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.791-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.749-0500 I COMMAND [conn110] CMD: drop test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.798-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23 (80205376-0cb8-4846-ba5b-6ccb94a7983f)'. Ident: 'index-327--4104909142373009110', commit timestamp: 'Timestamp(1574796686, 1720)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:27.270-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 from version {} to version { uuid: UUID("d0f4215a-b89e-4a24-a32f-19b376c4a7ad"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.767-0500 I COMMAND [conn80] CMD: dropIndexes test1_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.792-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.749-0500 I STORAGE [conn110] dropCollection: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b (ed6dc7fc-a823-4bcd-8191-43541b5f5f67) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.798-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23'. Ident: collection-315--4104909142373009110, commit timestamp: Timestamp(1574796686, 1720)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:27.271-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.agg_out to version 1|0||5ddd7d8e3bbfe7fa5630e252 took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.768-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index lastmod_1 on ns config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.795-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.749-0500 I STORAGE [conn110] Finishing collection drop for test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b (ed6dc7fc-a823-4bcd-8191-43541b5f5f67).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.815-0500 I INDEX [ReplWriterWorker-14] index build: starting on test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:27.272-0500 I SHARDING [conn19] distributed lock with ts: '5ddd7d8f5cde74b6784bb4f7' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.769-0500 I COMMAND [conn80] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.796-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 20134952-7e50-4552-915f-c3bc9b88d5df: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b ( ed6dc7fc-a823-4bcd-8191-43541b5f5f67 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.749-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b (ed6dc7fc-a823-4bcd-8191-43541b5f5f67)'. Ident: 'index-325-8224331490264904478', commit timestamp: 'Timestamp(1574796686, 2545)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.815-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:27.273-0500 I SHARDING [conn19] distributed lock with ts: '5ddd7d8f5cde74b6784bb4f5' unlocked.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:27.320-0500 I NETWORK [conn89] end connection 127.0.0.1:56162 (36 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.798-0500 I COMMAND [ReplWriterWorker-13] CMD: drop test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.749-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b (ed6dc7fc-a823-4bcd-8191-43541b5f5f67)'. Ident: 'index-326-8224331490264904478', commit timestamp: 'Timestamp(1574796686, 2545)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.815-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: a03a0365-42c6-44f7-b997-f00ce3f4d025: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b (ed6dc7fc-a823-4bcd-8191-43541b5f5f67 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.769-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: ba11c651-394d-4ae0-be0f-a403d573c1ee: config.cache.chunks.test1_fsmdb0.agg_out ( d42e625c-196f-4a50-b0c5-66d06bbde62c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:27.321-0500 I NETWORK [conn90] end connection 127.0.0.1:56190 (35 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.798-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 (d851e299-edb2-4d3b-b482-b93995646ad7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796686, 2032), t: 1 } and commit timestamp Timestamp(1574796686, 2032)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.749-0500 I STORAGE [conn110] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b'. Ident: collection-324-8224331490264904478, commit timestamp: Timestamp(1574796686, 2545)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.815-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.770-0500 I INDEX [ShardServerCatalogCacheLoader-1] Index build completed: ba11c651-394d-4ae0-be0f-a403d573c1ee
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:27.321-0500 I NETWORK [conn91] end connection 127.0.0.1:56200 (34 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.798-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 (d851e299-edb2-4d3b-b482-b93995646ad7).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.752-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.815-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.770-0500 I SHARDING [ShardServerCatalogCacheLoader-1] Marking collection config.cache.chunks.test1_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:27.323-0500 I NETWORK [conn92] end connection 127.0.0.1:56202 (33 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.798-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 (d851e299-edb2-4d3b-b482-b93995646ad7)'. Ident: 'index-322--8000595249233899911', commit timestamp: 'Timestamp(1574796686, 2032)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:29.943-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.753-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.819-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.819-0500 Implicit session: session { "id" : UUID("a384f421-4497-411a-a4f0-8382cb509fae") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.819-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.819-0500 true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.819-0500 2019-11-26T14:31:30.004-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.819-0500 2019-11-26T14:31:30.004-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.819-0500 2019-11-26T14:31:30.005-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.819-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.819-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.819-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.819-0500 [jsTest] New session started with sessionID: { "id" : UUID("d574dec1-8a6e-4f3b-b166-e241d09022cf") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500 2019-11-26T14:31:30.008-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500 2019-11-26T14:31:30.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500 2019-11-26T14:31:30.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500 2019-11-26T14:31:30.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500 2019-11-26T14:31:30.009-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500 [jsTest] New session started with sessionID: { "id" : UUID("f6273f27-6005-4de0-859a-abc8d5f8f1ef") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500 2019-11-26T14:31:30.010-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500 2019-11-26T14:31:30.010-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500 2019-11-26T14:31:30.010-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500 2019-11-26T14:31:30.010-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500 2019-11-26T14:31:30.011-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.820-0500 [jsTest] New session started with sessionID: { "id" : UUID("5c72cc44-1194-44a9-97cd-3479874cd1c1") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500 Implicit session: session { "id" : UUID("e48d4187-c5b2-4f21-b7ec-4798fbf37630") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500 Implicit session: session { "id" : UUID("754b2c92-2f93-4d70-9813-6e606814cfb1") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500 [jsTest] New session started with sessionID: { "id" : UUID("9ce12365-9210-47a4-b46f-3d683d93c875") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.818-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.772-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.821-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:27.333-0500 I NETWORK [conn88] end connection 127.0.0.1:56158 (32 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.822-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:29.995-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44942 #78 (1 connection now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.822-0500 [jsTest] New session started with sessionID: { "id" : UUID("f1a971fd-20ee-4478-8359-9553680648d3") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.822-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.822-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:30.011-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35122 #50 (9 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.822-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.822-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.822-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:30.011-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51766 #56 (9 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.822-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.823-0500 [jsTest] New session started with sessionID: { "id" : UUID("ef4d242e-e0cc-48bf-a8dd-1f4e3aaf487a") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.823-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.798-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 (d851e299-edb2-4d3b-b482-b93995646ad7)'. Ident: 'index-331--8000595249233899911', commit timestamp: 'Timestamp(1574796686, 2032)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.823-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.823-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.753-0500 I COMMAND [conn71] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.823-0500 Running data consistency checks for replica set: shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.820-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.823-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.823-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.773-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { flag: 1.0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.823-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:30.004-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56346 #95 (33 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.823-0500 [jsTest] New session started with sessionID: { "id" : UUID("0aabb8de-ebbb-4858-bed8-750b460eb8c6") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:29.995-0500 I NETWORK [conn78] received client metadata from 127.0.0.1:44942 conn78: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.823-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:30.011-0500 I NETWORK [conn50] received client metadata from 127.0.0.1:35122 conn50: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.824-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:30.011-0500 I NETWORK [conn56] received client metadata from 127.0.0.1:51766 conn56: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.824-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.798-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368'. Ident: collection-321--8000595249233899911, commit timestamp: Timestamp(1574796686, 2032)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.824-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.759-0500 I COMMAND [conn71] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.824-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.820-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 (d851e299-edb2-4d3b-b482-b93995646ad7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796686, 2032), t: 1 } and commit timestamp Timestamp(1574796686, 2032)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.824-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.773-0500 I COMMAND [conn80] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.824-0500 [jsTest] New session started with sessionID: { "id" : UUID("0bbc9230-ca9a-408a-9726-23e550f0276f") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:30.005-0500 I NETWORK [conn95] received client metadata from 127.0.0.1:56346 conn95: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.825-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:30.065-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44964 #79 (2 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.825-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:30.079-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35148 #51 (10 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.825-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:30.079-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51786 #57 (10 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.825-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.825-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.817-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.825-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.764-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.825-0500 [jsTest] New session started with sessionID: { "id" : UUID("c6472a52-36eb-458c-8f6a-30118b45fa81") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.820-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 (d851e299-edb2-4d3b-b482-b93995646ad7).
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.825-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.778-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.826-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:30.005-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56348 #96 (34 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.826-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:30.065-0500 I NETWORK [conn79] received client metadata from 127.0.0.1:44964 conn79: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.826-0500 Running data consistency checks for replica set: shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:30.068-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44966 #80 (3 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.826-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:30.079-0500 I NETWORK [conn57] received client metadata from 127.0.0.1:51786 conn57: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.826-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.826-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.817-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b (ed6dc7fc-a823-4bcd-8191-43541b5f5f67) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796686, 2545), t: 1 } and commit timestamp Timestamp(1574796686, 2545)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.826-0500 [jsTest] New session started with sessionID: { "id" : UUID("249c99a9-8954-4bc1-b401-e30a9bab4536") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.767-0500 I COMMAND [conn71] CMD: dropIndexes test1_fsmdb0.agg_out: { flag: 1.0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.826-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.820-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: a03a0365-42c6-44f7-b997-f00ce3f4d025: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b ( ed6dc7fc-a823-4bcd-8191-43541b5f5f67 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.827-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.798-0500 I COMMAND [conn80] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.827-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:30.005-0500 I NETWORK [conn96] received client metadata from 127.0.0.1:56348 conn96: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.827-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:30.114-0500 I NETWORK [conn96] end connection 127.0.0.1:56348 (33 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.827-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:30.069-0500 I NETWORK [conn80] received client metadata from 127.0.0.1:44966 conn80: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.827-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:30.090-0500 W CONTROL [conn57] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 40 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.827-0500 [jsTest] New session started with sessionID: { "id" : UUID("a79f8393-323b-48a3-8c6e-9ba28f691211") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.818-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b (ed6dc7fc-a823-4bcd-8191-43541b5f5f67).
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.827-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.770-0500 I COMMAND [conn71] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.828-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.820-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 (d851e299-edb2-4d3b-b482-b93995646ad7)'. Ident: 'index-322--4104909142373009110', commit timestamp: 'Timestamp(1574796686, 2032)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.828-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.803-0500 I COMMAND [conn80] CMD: dropIndexes test1_fsmdb0.agg_out: { flag: 1.0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.828-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.828-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:30.080-0500 I NETWORK [conn51] received client metadata from 127.0.0.1:35148 conn51: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.828-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:30.121-0500 I NETWORK [conn95] end connection 127.0.0.1:56346 (32 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.829-0500 [jsTest] New session started with sessionID: { "id" : UUID("6a709d5e-4f6f-4524-a9d5-78f2682d1e5f") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:30.107-0500 I NETWORK [conn80] end connection 127.0.0.1:44966 (2 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.829-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.829-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:30.105-0500 W CONTROL [conn57] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 40 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.829-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.829-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.818-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b (ed6dc7fc-a823-4bcd-8191-43541b5f5f67)'. Ident: 'index-334--8000595249233899911', commit timestamp: 'Timestamp(1574796686, 2545)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.829-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.772-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.829-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.830-0500 [jsTest] New session started with sessionID: { "id" : UUID("768e4fb8-1228-4256-af38-4d7c11d4b95c") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.820-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368 (d851e299-edb2-4d3b-b482-b93995646ad7)'. Ident: 'index-331--4104909142373009110', commit timestamp: 'Timestamp(1574796686, 2032)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.830-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.812-0500 I COMMAND [conn80] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.830-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.830-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:30.090-0500 W CONTROL [conn51] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 43 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.830-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.830-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:30.111-0500 I NETWORK [conn79] end connection 127.0.0.1:44964 (1 connection now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.830-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:30.108-0500 I NETWORK [conn57] end connection 127.0.0.1:51786 (9 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.831-0500 [jsTest] New session started with sessionID: { "id" : UUID("d0f1f08e-09b3-42ba-8796-bdce687fbb18") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.831-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.831-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.831-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.831-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.831-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.831-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.818-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b (ed6dc7fc-a823-4bcd-8191-43541b5f5f67)'. Ident: 'index-335--8000595249233899911', commit timestamp: 'Timestamp(1574796686, 2545)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.831-0500 [jsTest] New session started with sessionID: { "id" : UUID("721e1f03-a3ba-4241-b9c5-7ec7d53b05ab") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.831-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.831-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.831-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.773-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { flag: 1.0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:30.831-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js finished.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.820-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368'. Ident: collection-321--4104909142373009110, commit timestamp: Timestamp(1574796686, 2032)
[executor:fsm_workload_test:job0] 2019-11-26T14:31:30.832-0500 agg_out:CheckReplDBHashInBackground ran in 23.16 seconds: no failures detected.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.820-0500 I COMMAND [conn80] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:30.106-0500 W CONTROL [conn51] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 43 }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:30.114-0500 I NETWORK [conn78] end connection 127.0.0.1:44942 (0 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:30.121-0500 I NETWORK [conn56] end connection 127.0.0.1:51766 (8 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.818-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b'. Ident: collection-333--8000595249233899911, commit timestamp: Timestamp(1574796686, 2545)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.773-0500 I COMMAND [conn71] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.840-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b
[executor:fsm_workload_test:job0] 2019-11-26T14:31:30.833-0500 Running agg_out:CheckReplDBHash...
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.869-0500 I NETWORK [conn122] end connection 127.0.0.1:46514 (45 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.834-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash.js
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:30.108-0500 I NETWORK [conn51] end connection 127.0.0.1:35148 (9 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.862-0500 W CONTROL [conn52] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 323 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.778-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.840-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b (ed6dc7fc-a823-4bcd-8191-43541b5f5f67) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796686, 2545), t: 1 } and commit timestamp Timestamp(1574796686, 2545)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:26.876-0500 I NETWORK [conn121] end connection 127.0.0.1:46510 (44 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:30.121-0500 I NETWORK [conn50] end connection 127.0.0.1:35122 (8 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.864-0500 I NETWORK [conn52] end connection 127.0.0.1:52982 (13 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.797-0500 I COMMAND [conn71] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.840-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b (ed6dc7fc-a823-4bcd-8191-43541b5f5f67).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:27.307-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:26.876-0500 I NETWORK [conn51] end connection 127.0.0.1:52958 (12 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.803-0500 I COMMAND [conn71] CMD: dropIndexes test1_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.840-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b (ed6dc7fc-a823-4bcd-8191-43541b5f5f67)'. Ident: 'index-334--4104909142373009110', commit timestamp: 'Timestamp(1574796686, 2545)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:27.320-0500 I NETWORK [conn112] end connection 127.0.0.1:46392 (43 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:27.322-0500 I NETWORK [conn48] end connection 127.0.0.1:52872 (11 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.812-0500 I COMMAND [conn71] CMD: dropIndexes test1_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.840-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b (ed6dc7fc-a823-4bcd-8191-43541b5f5f67)'. Ident: 'index-335--4104909142373009110', commit timestamp: 'Timestamp(1574796686, 2545)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:27.321-0500 I NETWORK [conn113] end connection 127.0.0.1:46400 (42 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:27.333-0500 I NETWORK [conn47] end connection 127.0.0.1:52834 (10 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.820-0500 I COMMAND [conn71] CMD: dropIndexes test1_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.840-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b'. Ident: collection-333--4104909142373009110, commit timestamp: Timestamp(1574796686, 2545)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:27.322-0500 I NETWORK [conn114] end connection 127.0.0.1:46420 (41 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:27.358-0500 I NETWORK [shard-registry-reload] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.861-0500 W CONTROL [conn119] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.862-0500 W CONTROL [conn56] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 718 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:27.322-0500 I NETWORK [conn116] end connection 127.0.0.1:46428 (40 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:27.358-0500 I NETWORK [shard-registry-reload] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.863-0500 I NETWORK [conn118] end connection 127.0.0.1:39054 (44 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.864-0500 I NETWORK [conn56] end connection 127.0.0.1:52092 (12 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:27.322-0500 I NETWORK [conn115] end connection 127.0.0.1:46422 (39 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:27.392-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53000 #54 (11 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.864-0500 I NETWORK [conn119] end connection 127.0.0.1:39056 (43 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:26.876-0500 I NETWORK [conn55] end connection 127.0.0.1:52072 (11 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:27.333-0500 I NETWORK [conn111] end connection 127.0.0.1:46388 (38 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.841-0500 JSTest jstests/hooks/run_check_repl_dbhash.js started with pid 15488.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:27.392-0500 I NETWORK [conn54] received client metadata from 127.0.0.1:53000 conn54: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.868-0500 I NETWORK [conn117] end connection 127.0.0.1:39040 (42 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:27.322-0500 I NETWORK [conn48] end connection 127.0.0.1:51982 (10 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:29.295-0500 I NETWORK [conn17] end connection 127.0.0.1:45670 (37 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:30.008-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53016 #55 (12 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:26.876-0500 I NETWORK [conn116] end connection 127.0.0.1:39034 (41 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:27.333-0500 I NETWORK [conn47] end connection 127.0.0.1:51950 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:29.299-0500 I NETWORK [conn19] end connection 127.0.0.1:45674 (36 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:30.009-0500 I NETWORK [conn55] received client metadata from 127.0.0.1:53016 conn55: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:27.307-0500 I COMMAND [conn65] CMD: dropIndexes test1_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:27.392-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52116 #57 (10 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:30.076-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53040 #56 (13 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.011-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46566 #127 (37 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:27.320-0500 I NETWORK [conn93] end connection 127.0.0.1:38918 (40 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:27.392-0500 I NETWORK [conn57] received client metadata from 127.0.0.1:52116 conn57: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:30.076-0500 I NETWORK [conn56] received client metadata from 127.0.0.1:53040 conn56: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.011-0500 I NETWORK [conn127] received client metadata from 127.0.0.1:46566 conn127: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:27.321-0500 I NETWORK [conn94] end connection 127.0.0.1:38932 (39 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:30.008-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52126 #58 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:30.087-0500 W CONTROL [conn56] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 323 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.011-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46570 #128 (38 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:27.321-0500 I NETWORK [conn95] end connection 127.0.0.1:38944 (38 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:30.009-0500 I NETWORK [conn58] received client metadata from 127.0.0.1:52126 conn58: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:30.109-0500 W CONTROL [conn56] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 323 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.011-0500 I NETWORK [conn128] received client metadata from 127.0.0.1:46570 conn128: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:27.321-0500 I NETWORK [conn97] end connection 127.0.0.1:38952 (37 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:30.075-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52148 #59 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:30.111-0500 I NETWORK [conn56] end connection 127.0.0.1:53040 (12 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.076-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46582 #129 (39 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:27.322-0500 I NETWORK [conn96] end connection 127.0.0.1:38946 (36 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:30.075-0500 I NETWORK [conn59] received client metadata from 127.0.0.1:52148 conn59: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:30.121-0500 I NETWORK [conn55] end connection 127.0.0.1:53016 (11 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.076-0500 I NETWORK [conn129] received client metadata from 127.0.0.1:46582 conn129: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:27.333-0500 I NETWORK [conn92] end connection 127.0.0.1:38914 (35 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:30.086-0500 W CONTROL [conn59] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 718 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.078-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46586 #130 (40 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:27.392-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39080 #123 (36 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:30.109-0500 W CONTROL [conn59] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 718 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.078-0500 I NETWORK [conn130] received client metadata from 127.0.0.1:46586 conn130: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:27.392-0500 I NETWORK [conn123] received client metadata from 127.0.0.1:39080 conn123: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:30.111-0500 I NETWORK [conn59] end connection 127.0.0.1:52148 (11 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.089-0500 W CONTROL [conn130] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 47 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:30.121-0500 I NETWORK [conn58] end connection 127.0.0.1:52126 (10 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.008-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39090 #124 (37 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.105-0500 W CONTROL [conn130] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 47 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.009-0500 I NETWORK [conn124] received client metadata from 127.0.0.1:39090 conn124: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.107-0500 I NETWORK [conn129] end connection 127.0.0.1:46582 (39 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.009-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39096 #125 (38 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.108-0500 I NETWORK [conn130] end connection 127.0.0.1:46586 (38 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.009-0500 I NETWORK [conn125] received client metadata from 127.0.0.1:39096 conn125: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.114-0500 I NETWORK [conn128] end connection 127.0.0.1:46570 (37 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.072-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39110 #126 (39 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.121-0500 I NETWORK [conn127] end connection 127.0.0.1:46566 (36 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.072-0500 I NETWORK [conn126] received client metadata from 127.0.0.1:39110 conn126: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.075-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39112 #127 (40 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.075-0500 I NETWORK [conn127] received client metadata from 127.0.0.1:39112 conn127: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.086-0500 W CONTROL [conn127] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.108-0500 W CONTROL [conn127] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.111-0500 I NETWORK [conn126] end connection 127.0.0.1:39110 (39 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.111-0500 I NETWORK [conn127] end connection 127.0.0.1:39112 (38 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.114-0500 I NETWORK [conn125] end connection 127.0.0.1:39096 (37 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.121-0500 I NETWORK [conn124] end connection 127.0.0.1:39090 (36 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.864-0500 MongoDB shell version v0.0.0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.914-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:30.915-0500 I NETWORK [listener] connection accepted from 127.0.0.1:44984 #81 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:30.915-0500 I NETWORK [conn81] received client metadata from 127.0.0.1:44984 conn81: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.917-0500 Implicit session: session { "id" : UUID("461f45f6-a488-4a65-8ebc-ece708b8588e") }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.919-0500 MongoDB server version: 0.0.0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.920-0500 true
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.924-0500 2019-11-26T14:31:30.924-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.924-0500 2019-11-26T14:31:30.924-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:30.924-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56388 #97 (33 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:30.924-0500 I NETWORK [conn97] received client metadata from 127.0.0.1:56388 conn97: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.925-0500 2019-11-26T14:31:30.925-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:30.925-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56390 #98 (34 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:30.925-0500 I NETWORK [conn98] received client metadata from 127.0.0.1:56390 conn98: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.926-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.926-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.926-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.926-0500 [jsTest] New session started with sessionID: { "id" : UUID("a0b581e7-20d1-4662-9bf1-90a8f55a7d48") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.926-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.926-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.926-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.928-0500 2019-11-26T14:31:30.928-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.928-0500 2019-11-26T14:31:30.928-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.928-0500 2019-11-26T14:31:30.928-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.928-0500 2019-11-26T14:31:30.928-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.928-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39132 #128 (37 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:30.928-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53056 #57 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:30.928-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52170 #60 (11 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.928-0500 I NETWORK [conn128] received client metadata from 127.0.0.1:39132 conn128: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:30.928-0500 I NETWORK [conn57] received client metadata from 127.0.0.1:53056 conn57: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.929-0500 2019-11-26T14:31:30.929-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:30.928-0500 I NETWORK [conn60] received client metadata from 127.0.0.1:52170 conn60: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.929-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39138 #129 (38 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.929-0500 I NETWORK [conn129] received client metadata from 127.0.0.1:39138 conn129: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.929-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.930-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.930-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.930-0500 [jsTest] New session started with sessionID: { "id" : UUID("2097188d-49a2-4b97-8dae-70265469c4e4") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.930-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.930-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.930-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.930-0500 2019-11-26T14:31:30.930-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.930-0500 2019-11-26T14:31:30.930-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.930-0500 2019-11-26T14:31:30.930-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.930-0500 2019-11-26T14:31:30.930-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.930-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46606 #131 (37 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:30.931-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35168 #52 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:30.930-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51806 #58 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.931-0500 I NETWORK [conn131] received client metadata from 127.0.0.1:46606 conn131: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:30.931-0500 I NETWORK [conn58] received client metadata from 127.0.0.1:51806 conn58: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.931-0500 2019-11-26T14:31:30.931-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:30.931-0500 I NETWORK [conn52] received client metadata from 127.0.0.1:35168 conn52: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.931-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46612 #132 (38 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.931-0500 I NETWORK [conn132] received client metadata from 127.0.0.1:46612 conn132: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.932-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.932-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.932-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.932-0500 [jsTest] New session started with sessionID: { "id" : UUID("79cd1f06-a234-460e-b194-e67763c21358") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.932-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.932-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.932-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.933-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "sharded cluster", "configsvr" : { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }, "shards" : { "shard-rs0" : { "type" : "replica set", "primary" : "localhost:20001", "nodes" : [ "localhost:20001", "localhost:20002", "localhost:20003" ] }, "shard-rs1" : { "type" : "replica set", "primary" : "localhost:20004", "nodes" : [ "localhost:20004", "localhost:20005", "localhost:20006" ] } }, "mongos" : { "type" : "mongos router", "nodes" : [ "localhost:20007", "localhost:20008" ] } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.985-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:30.985-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45006 #82 (2 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:30.985-0500 I NETWORK [conn82] received client metadata from 127.0.0.1:45006 conn82: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.987-0500 Implicit session: session { "id" : UUID("eeed1fde-3d00-4c6c-8225-f78a1006d751") }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.988-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:30.988-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45008 #83 (3 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:30.988-0500 I NETWORK [conn83] received client metadata from 127.0.0.1:45008 conn83: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.988-0500 MongoDB server version: 0.0.0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.990-0500 Implicit session: session { "id" : UUID("d5d0e748-81fa-49c5-9f9d-101660847fc1") }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.991-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.992-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39152 #130 (39 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.992-0500 I NETWORK [conn130] received client metadata from 127.0.0.1:39152 conn130: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.993-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.993-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.993-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.993-0500 [jsTest] New session started with sessionID: { "id" : UUID("b41f1b5b-36aa-42db-813f-d1145a170f21") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.994-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.994-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.994-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.995-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46620 #133 (39 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.995-0500 I NETWORK [conn133] received client metadata from 127.0.0.1:46620 conn133: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.995-0500 Recreating replica set from config {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.995-0500 "_id" : "shard-rs0",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.995-0500 "version" : 2,
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.995-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39156 #131 (40 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 "protocolVersion" : NumberLong(1),
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:30.996-0500 I NETWORK [conn131] received client metadata from 127.0.0.1:39156 conn131: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 "writeConcernMajorityJournalDefault" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 "members" : [
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 "_id" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 "host" : "localhost:20001",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 "priority" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.996-0500 "_id" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.997-0500 "host" : "localhost:20002",
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:30.996-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53082 #58 (13 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.997-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:30.996-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52192 #61 (12 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.997-0500 "buildIndexes" : true,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:30.997-0500 I NETWORK [conn58] received client metadata from 127.0.0.1:53082 conn58: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.997-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.997-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.997-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.997-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.997-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.997-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.997-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 },
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:30.996-0500 I NETWORK [conn61] received client metadata from 127.0.0.1:52192 conn61: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "_id" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "host" : "localhost:20003",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 ],
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "settings" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "chainingAllowed" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "heartbeatIntervalMillis" : 2000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "heartbeatTimeoutSecs" : 10,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "electionTimeoutMillis" : 86400000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "catchUpTimeoutMillis" : -1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "catchUpTakeoverDelayMillis" : 30000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.998-0500 "getLastErrorModes" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.999-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.999-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.999-0500 "getLastErrorDefaults" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.999-0500 "w" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.999-0500 "wtimeout" : 0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.999-0500 },
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:30.999-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51828 #59 (10 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.999-0500 "replicaSetId" : ObjectId("5ddd7d683bbfe7fa5630d3b8")
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.999-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.999-0500 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.998-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46628 #134 (40 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.999-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.999-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.999-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.999-0500 [jsTest] New session started with sessionID: { "id" : UUID("0ae7b3dc-cdce-4d57-ad09-df757c4a7aaf") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:30.999-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500 [jsTest] New session started with sessionID: { "id" : UUID("60ea0788-2679-4917-8789-29bb3aef984f") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500 Recreating replica set from config {
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:30.999-0500 I NETWORK [conn59] received client metadata from 127.0.0.1:51828 conn59: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500 "_id" : "shard-rs1",
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:30.998-0500 I NETWORK [conn134] received client metadata from 127.0.0.1:46628 conn134: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500 "version" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500 "protocolVersion" : NumberLong(1),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500 "writeConcernMajorityJournalDefault" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500 "members" : [
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500 "_id" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.000-0500 "host" : "localhost:20004",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 "priority" : 1,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:30.999-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35190 #53 (10 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 "tags" : {
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:30.999-0500 I NETWORK [conn53] received client metadata from 127.0.0.1:35190 conn53: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 "_id" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 "host" : "localhost:20005",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.001-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "_id" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "host" : "localhost:20006",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 ],
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "settings" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "chainingAllowed" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "heartbeatIntervalMillis" : 2000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "heartbeatTimeoutSecs" : 10,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "electionTimeoutMillis" : 86400000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "catchUpTimeoutMillis" : -1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "catchUpTakeoverDelayMillis" : 30000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "getLastErrorModes" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.002-0500 "getLastErrorDefaults" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500 "w" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500 "wtimeout" : 0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500 "replicaSetId" : ObjectId("5ddd7d6bcf8184c2e1492eba")
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500 [jsTest] New session started with sessionID: { "id" : UUID("5b4ff44d-2af4-4c1e-8a40-faae28f897f1") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500 [jsTest] New session started with sessionID: { "id" : UUID("52ef5a2d-b19f-4b3c-9e41-27aeb212741e") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500 [jsTest] New session started with sessionID: { "id" : UUID("8e417e67-6ec8-4e01-949c-a85fdb761414") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.003-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.004-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.004-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.004-0500 [jsTest] New session started with sessionID: { "id" : UUID("531611ff-38e6-4aff-aebb-f1b82012ebc0") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.004-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.004-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.004-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.004-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.004-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.004-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.004-0500 [jsTest] New session started with sessionID: { "id" : UUID("4ac4bb8c-f1f7-4595-8a44-014e0587a214") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.004-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.004-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.004-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.013-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.013-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.013-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.013-0500 [jsTest] Freezing nodes: [localhost:20002,localhost:20003]
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.013-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.013-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.013-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.014-0500 I COMMAND [conn61] Attempting to step down in response to replSetStepDown command
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.014-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.015-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.015-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.015-0500 [jsTest] Freezing nodes: [localhost:20005,localhost:20006]
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.015-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.015-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.015-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.015-0500 I REPL [conn61] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.015-0500 I COMMAND [conn59] Attempting to step down in response to replSetStepDown command
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.015-0500 I COMMAND [conn58] Attempting to step down in response to replSetStepDown command
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.016-0500 I REPL [conn58] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.016-0500 I REPL [conn59] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.017-0500 I COMMAND [conn53] Attempting to step down in response to replSetStepDown command
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.017-0500 I REPL [conn53] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.018-0500 I COMMAND [conn131] CMD fsync: sync:1 lock:1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.020-0500 I COMMAND [conn134] CMD fsync: sync:1 lock:1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.074-0500 W COMMAND [fsyncLockWorker] WARNING: instance is locked, blocking all writes. The fsync command has finished execution, remember to unlock the instance using fsyncUnlock().
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.074-0500 I COMMAND [conn134] mongod is locked and no writes are allowed. db.fsyncUnlock() to unlock
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.074-0500 I COMMAND [conn134] Lock count is 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.074-0500 I COMMAND [conn134] For more info see http://dochub.mongodb.org/core/fsynccommand
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.075-0500 ReplSetTest awaitReplication: going to check only localhost:20005,localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.081-0500 ReplSetTest awaitReplication: starting: optime for primary, localhost:20004, is { "ts" : Timestamp(1574796691, 7), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.082-0500 ReplSetTest awaitReplication: checking secondaries against latest primary optime { "ts" : Timestamp(1574796691, 7), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.084-0500 ReplSetTest awaitReplication: checking secondary #0: localhost:20005
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.085-0500 ReplSetTest awaitReplication: secondary #0, localhost:20005, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.086-0500 ReplSetTest awaitReplication: checking secondary #1: localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.087-0500 ReplSetTest awaitReplication: secondary #1, localhost:20006, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.087-0500 ReplSetTest awaitReplication: finished: all 2 secondaries synced at optime { "ts" : Timestamp(1574796691, 7), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.091-0500 checkDBHashesForReplSet checking data hashes against primary: localhost:20004
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.091-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20005
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.093-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.200-0500 I COMMAND [conn134] command: unlock requested
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.202-0500 I COMMAND [conn134] fsyncUnlock completed. mongod is now unlocked and free to accept writes
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.203-0500 I REPL [conn59] 'unfreezing'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.203-0500 I REPL [conn53] 'unfreezing'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.205-0500 I NETWORK [conn83] end connection 127.0.0.1:45008 (2 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.205-0500 I NETWORK [conn133] end connection 127.0.0.1:46620 (39 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.205-0500 I NETWORK [conn134] end connection 127.0.0.1:46628 (38 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.206-0500 I NETWORK [conn59] end connection 127.0.0.1:51828 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.205-0500 I NETWORK [conn53] end connection 127.0.0.1:35190 (9 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.505-0500 W COMMAND [fsyncLockWorker] WARNING: instance is locked, blocking all writes. The fsync command has finished execution, remember to unlock the instance using fsyncUnlock().
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.505-0500 I COMMAND [conn131] mongod is locked and no writes are allowed. db.fsyncUnlock() to unlock
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.505-0500 I COMMAND [conn131] Lock count is 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.505-0500 I COMMAND [conn131] For more info see http://dochub.mongodb.org/core/fsynccommand
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.505-0500 I COMMAND [conn131] command admin.$cmd appName: "MongoDB Shell" command: fsync { fsync: 1.0, lock: 1.0, allowFsyncFailure: true, lsid: { id: UUID("60ea0788-2679-4917-8789-29bb3aef984f") }, $clusterTime: { clusterTime: Timestamp(1574796691, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:477 locks:{ Mutex: { acquireCount: { W: 1 } } } protocol:op_msg 486ms
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.506-0500 ReplSetTest awaitReplication: going to check only localhost:20002,localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.513-0500 ReplSetTest awaitReplication: starting: optime for primary, localhost:20001, is { "ts" : Timestamp(1574796691, 8), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.513-0500 ReplSetTest awaitReplication: checking secondaries against latest primary optime { "ts" : Timestamp(1574796691, 8), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.515-0500 ReplSetTest awaitReplication: checking secondary #0: localhost:20002
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.516-0500 ReplSetTest awaitReplication: secondary #0, localhost:20002, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.518-0500 ReplSetTest awaitReplication: checking secondary #1: localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.519-0500 ReplSetTest awaitReplication: secondary #1, localhost:20003, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.519-0500 ReplSetTest awaitReplication: finished: all 2 secondaries synced at optime { "ts" : Timestamp(1574796691, 8), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.523-0500 checkDBHashesForReplSet checking data hashes against primary: localhost:20001
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.523-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20002
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.525-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20003
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.609-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796686, 7)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.609-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-73--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 6)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.612-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-79--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 6)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.612-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-69--2588534479858262356 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 6)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.613-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-93--2588534479858262356 (ns: config.cache.chunks.test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 10)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.614-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-95--2588534479858262356 (ns: config.cache.chunks.test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 10)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.615-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-91--2588534479858262356 (ns: config.cache.chunks.test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 10)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.616-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-26--2588534479858262356 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 15)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.617-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-27--2588534479858262356 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 15)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.619-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-25--2588534479858262356 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 15)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.619-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-30--2588534479858262356 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 23)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.620-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-31--2588534479858262356 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 23)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.621-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-29--2588534479858262356 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 23)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.666-0500 I COMMAND [conn131] command: unlock requested
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.668-0500 I COMMAND [conn131] fsyncUnlock completed. mongod is now unlocked and free to accept writes
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.668-0500 I REPL [conn61] 'unfreezing'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.669-0500 I REPL [conn58] 'unfreezing'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.670-0500 I NETWORK [conn82] end connection 127.0.0.1:45006 (1 connection now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.670-0500 I NETWORK [conn130] end connection 127.0.0.1:39152 (39 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.670-0500 I NETWORK [conn131] end connection 127.0.0.1:39156 (38 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.671-0500 I NETWORK [conn58] end connection 127.0.0.1:53082 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.670-0500 I NETWORK [conn61] end connection 127.0.0.1:52192 (11 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.672-0500 Finished data consistency checks for cluster in 751 ms.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.673-0500 I NETWORK [conn81] end connection 127.0.0.1:44984 (0 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.673-0500 I NETWORK [conn98] end connection 127.0.0.1:56390 (33 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.674-0500 I NETWORK [conn129] end connection 127.0.0.1:39138 (37 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.674-0500 I NETWORK [conn132] end connection 127.0.0.1:46612 (37 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.681-0500 I NETWORK [conn57] end connection 127.0.0.1:53056 (11 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.681-0500 I NETWORK [conn52] end connection 127.0.0.1:35168 (8 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.681-0500 I NETWORK [conn97] end connection 127.0.0.1:56388 (32 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.681-0500 I NETWORK [conn58] end connection 127.0.0.1:51806 (8 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.681-0500 I NETWORK [conn131] end connection 127.0.0.1:46606 (36 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.681-0500 I NETWORK [conn60] end connection 127.0.0.1:52170 (10 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.681-0500 I NETWORK [conn128] end connection 127.0.0.1:39132 (36 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:31.682-0500 JSTest jstests/hooks/run_check_repl_dbhash.js finished.
[executor:fsm_workload_test:job0] 2019-11-26T14:31:31.683-0500 agg_out:CheckReplDBHash ran in 0.85 seconds: no failures detected.
[executor:fsm_workload_test:job0] 2019-11-26T14:31:31.683-0500 Running agg_out:ValidateCollections...
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.684-0500 Starting JSTest jstests/hooks/run_validate_collections.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_validate_collections"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_validate_collections.js
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.691-0500 JSTest jstests/hooks/run_validate_collections.js started with pid 15521.
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.713-0500 MongoDB shell version v0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.763-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.763-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45026 #84 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.764-0500 I NETWORK [conn84] received client metadata from 127.0.0.1:45026 conn84: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.765-0500 Implicit session: session { "id" : UUID("f5d0f914-38a8-48b9-bc07-de8349974dbc") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.767-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.769-0500 true
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.772-0500 2019-11-26T14:31:31.772-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.772-0500 2019-11-26T14:31:31.772-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.773-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56430 #99 (33 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.773-0500 I NETWORK [conn99] received client metadata from 127.0.0.1:56430 conn99: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.773-0500 2019-11-26T14:31:31.773-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.773-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56432 #100 (34 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.774-0500 I NETWORK [conn100] received client metadata from 127.0.0.1:56432 conn100: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.774-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.775-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.775-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.775-0500 [jsTest] New session started with sessionID: { "id" : UUID("21b98927-ea05-4e3e-9f10-92c4a6e08335") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.775-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.775-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.775-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.777-0500 2019-11-26T14:31:31.777-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.777-0500 2019-11-26T14:31:31.777-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.777-0500 2019-11-26T14:31:31.777-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.777-0500 2019-11-26T14:31:31.777-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.777-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53096 #59 (12 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.777-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39178 #132 (37 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.777-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52210 #62 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.777-0500 I NETWORK [conn62] received client metadata from 127.0.0.1:52210 conn62: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.777-0500 I NETWORK [conn59] received client metadata from 127.0.0.1:53096 conn59: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.777-0500 I NETWORK [conn132] received client metadata from 127.0.0.1:39178 conn132: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.778-0500 2019-11-26T14:31:31.778-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.778-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39180 #133 (38 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.778-0500 I NETWORK [conn133] received client metadata from 127.0.0.1:39180 conn133: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.779-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.779-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.779-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.779-0500 [jsTest] New session started with sessionID: { "id" : UUID("43249ca6-6037-4343-8a5e-896aefd28eb4") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.779-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.779-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.779-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.779-0500 2019-11-26T14:31:31.779-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.780-0500 2019-11-26T14:31:31.780-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.780-0500 2019-11-26T14:31:31.780-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.780-0500 2019-11-26T14:31:31.780-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.780-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51846 #60 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.780-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35208 #54 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.780-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46652 #135 (37 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.780-0500 I NETWORK [conn135] received client metadata from 127.0.0.1:46652 conn135: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.780-0500 I NETWORK [conn60] received client metadata from 127.0.0.1:51846 conn60: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.780-0500 I NETWORK [conn54] received client metadata from 127.0.0.1:35208 conn54: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.780-0500 2019-11-26T14:31:31.780-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.781-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46654 #136 (38 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.781-0500 I NETWORK [conn136] received client metadata from 127.0.0.1:46654 conn136: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.781-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.781-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.782-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.782-0500 [jsTest] New session started with sessionID: { "id" : UUID("820fcb69-756e-4451-a7b4-481635770111") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.782-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.782-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.782-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.850-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.850-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45048 #85 (2 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.850-0500 I NETWORK [conn85] received client metadata from 127.0.0.1:45048 conn85: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.851-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.851-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.851-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.851-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.851-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45050 #86 (3 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.851-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45052 #87 (4 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.852-0500 I NETWORK [conn86] received client metadata from 127.0.0.1:45050 conn86: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.852-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45054 #88 (5 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.852-0500 I NETWORK [conn87] received client metadata from 127.0.0.1:45052 conn87: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.852-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45056 #89 (6 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.852-0500 I NETWORK [conn88] received client metadata from 127.0.0.1:45054 conn88: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.852-0500 Implicit session: session { "id" : UUID("92f0dd0f-5e29-4005-8055-83f305eab957") }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.852-0500 I NETWORK [conn89] received client metadata from 127.0.0.1:45056 conn89: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.853-0500 Implicit session: session { "id" : UUID("203cb81d-6a35-48a1-8e4a-6925a27d7c88") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.853-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.853-0500 Implicit session: session { "id" : UUID("cc60651a-37eb-4a49-bb26-24a8ee59cf9a") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.854-0500 Implicit session: session { "id" : UUID("49cde773-3f92-46cb-a8d3-36d68d20cdc2") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.854-0500 Implicit session: session { "id" : UUID("75267546-84a1-44e1-b44a-b91f3bee0839") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.855-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.855-0500 Running validate() on localhost:20001
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.855-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.855-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39200 #134 (39 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.855-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.855-0500 I NETWORK [conn134] received client metadata from 127.0.0.1:39200 conn134: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.855-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.856-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.856-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.856-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.856-0500 [jsTest] New session started with sessionID: { "id" : UUID("e698829a-8ad8-4bc3-9b79-d7064b3c0dc9") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.856-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.857-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.857-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.857-0500 Running validate() on localhost:20002
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.857-0500 Running validate() on localhost:20000
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.857-0500 Running validate() on localhost:20004
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.857-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56464 #101 (35 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.857-0500 I NETWORK [conn101] received client metadata from 127.0.0.1:56464 conn101: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.857-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46672 #137 (39 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.857-0500 I NETWORK [conn137] received client metadata from 127.0.0.1:46672 conn137: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.856-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52236 #63 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.857-0500 I NETWORK [conn63] received client metadata from 127.0.0.1:52236 conn63: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.857-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53130 #60 (13 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.857-0500 I NETWORK [conn60] received client metadata from 127.0.0.1:53130 conn60: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.857-0500 Running validate() on localhost:20003
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500 [jsTest] New session started with sessionID: { "id" : UUID("27d510bd-5868-4732-bbe4-fbf7a1d63edf") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500 [jsTest] New session started with sessionID: { "id" : UUID("b89e991e-c5ef-4639-8e06-b44e6a4ae9fd") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500 [jsTest] New session started with sessionID: { "id" : UUID("3084a21b-e5d1-4e42-ae70-bcd6fd146fe7") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.858-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.859-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.859-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.859-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.859-0500 [jsTest] New session started with sessionID: { "id" : UUID("6ba23ced-cd18-45a9-810b-d80649639e2f") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.859-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.859-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.859-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.861-0500 I COMMAND [conn134] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.861-0500 I INDEX [conn134] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.862-0500 I COMMAND [conn63] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.862-0500 W STORAGE [conn63] Could not complete validation of table:collection-17--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.862-0500 I INDEX [conn63] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.862-0500 W STORAGE [conn63] Could not complete validation of table:index-18--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.862-0500 I COMMAND [conn101] CMD: validate admin.system.keys, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.863-0500 W STORAGE [conn101] Could not complete validation of table:collection-41-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.863-0500 I INDEX [conn63] validating collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.863-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection admin.system.keys
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.863-0500 W STORAGE [conn101] Could not complete validation of table:index-42-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.863-0500 I INDEX [conn63] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.863-0500 I INDEX [conn63] Validation complete for collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.863-0500 I INDEX [conn101] validating collection admin.system.keys (UUID: 807238e6-a72f-4ef0-b305-4bab60afd0e6)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.863-0500 I COMMAND [conn137] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.863-0500 I COMMAND [conn60] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.863-0500 I INDEX [conn101] validating index consistency _id_ on collection admin.system.keys
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.863-0500 I INDEX [conn101] Validation complete for collection admin.system.keys (UUID: 807238e6-a72f-4ef0-b305-4bab60afd0e6). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.864-0500 I COMMAND [conn101] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.864-0500 I INDEX [conn134] validating collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.863-0500 W STORAGE [conn60] Could not complete validation of table:collection-17--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.864-0500 I INDEX [conn137] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.864-0500 I INDEX [conn134] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.863-0500 I INDEX [conn60] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.864-0500 I INDEX [conn134] Validation complete for collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.863-0500 W STORAGE [conn60] Could not complete validation of table:index-18--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.863-0500 I INDEX [conn60] validating collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.863-0500 I INDEX [conn60] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.863-0500 I INDEX [conn60] Validation complete for collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.864-0500 I COMMAND [conn63] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.864-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.865-0500 W STORAGE [conn63] Could not complete validation of table:collection-31--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:31.883-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.865-0500 I COMMAND [conn60] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.816-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.816-0500 Implicit session: session { "id" : UUID("c5636d17-92b5-4824-b63e-3d3753d12f2a") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.816-0500 Implicit session: session { "id" : UUID("b26b91bd-f90c-45aa-9483-9ff905d7222d") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.816-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.816-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.816-0500 Running validate() on localhost:20005
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.816-0500 Running validate() on localhost:20006
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.816-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.816-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.816-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.816-0500 [jsTest] New session started with sessionID: { "id" : UUID("9c9b3e20-045f-4332-8f9f-d799deba4410") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.816-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.817-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.817-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.817-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.817-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.817-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.817-0500 [jsTest] New session started with sessionID: { "id" : UUID("5c9edaff-be59-4f13-8cd6-f01443461b10") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.817-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.817-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.817-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.865-0500 I COMMAND [conn134] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.866-0500 I INDEX [conn137] validating collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:32.817-0500 JSTest jstests/hooks/run_validate_collections.js finished.
[executor:fsm_workload_test:job0] 2019-11-26T14:31:32.818-0500 agg_out:ValidateCollections ran in 1.13 seconds: no failures detected.
[executor:fsm_workload_test:job0] 2019-11-26T14:31:32.818-0500 Running agg_out:CleanupConcurrencyWorkloads...
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.866-0500 I INDEX [conn101] validating collection admin.system.version (UUID: 1b1834a4-71ee-49e7-abbc-7ae09d5089b2)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.883-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45068 #90 (7 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.891-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51878 #61 (10 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.891-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35238 #55 (10 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.865-0500 I INDEX [conn63] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.865-0500 W STORAGE [conn60] Could not complete validation of table:collection-31--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.866-0500 I INDEX [conn137] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.866-0500 I INDEX [conn134] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.867-0500 I INDEX [conn101] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.883-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45069 #91 (8 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.891-0500 I NETWORK [conn61] received client metadata from 127.0.0.1:51878 conn61: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.891-0500 I NETWORK [conn55] received client metadata from 127.0.0.1:35238 conn55: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:32.821-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58220 #36 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.865-0500 W STORAGE [conn63] Could not complete validation of table:index-32--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.865-0500 I INDEX [conn60] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.866-0500 I INDEX [conn137] Validation complete for collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.868-0500 I INDEX [conn134] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.867-0500 I INDEX [conn101] Validation complete for collection admin.system.version (UUID: 1b1834a4-71ee-49e7-abbc-7ae09d5089b2). No corruption found.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.883-0500 I NETWORK [conn90] received client metadata from 127.0.0.1:45068 conn90: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.898-0500 I COMMAND [conn61] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.898-0500 I COMMAND [conn55] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:32.822-0500 I NETWORK [conn36] received client metadata from 127.0.0.1:58220 conn36: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.865-0500 I INDEX [conn63] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.865-0500 W STORAGE [conn60] Could not complete validation of table:index-32--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.868-0500 I COMMAND [conn137] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.870-0500 I INDEX [conn134] validating collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.869-0500 I COMMAND [conn101] CMD: validate config.actionlog, full:true
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.884-0500 I NETWORK [conn91] received client metadata from 127.0.0.1:45069 conn91: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.898-0500 W STORAGE [conn61] Could not complete validation of table:collection-17--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.898-0500 W STORAGE [conn55] Could not complete validation of table:collection-17--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.865-0500 W STORAGE [conn63] Could not complete validation of table:index-35--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.865-0500 I INDEX [conn60] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.869-0500 I INDEX [conn137] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.870-0500 I INDEX [conn134] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.869-0500 W STORAGE [conn101] Could not complete validation of table:collection-47-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.912-0500 I NETWORK [conn87] end connection 127.0.0.1:45052 (7 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.898-0500 I INDEX [conn61] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.898-0500 I INDEX [conn55] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.865-0500 I INDEX [conn63] validating collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.865-0500 W STORAGE [conn60] Could not complete validation of table:index-35--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.870-0500 I INDEX [conn137] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.870-0500 I INDEX [conn134] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.869-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection config.actionlog
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.928-0500 I NETWORK [conn88] end connection 127.0.0.1:45054 (6 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.898-0500 W STORAGE [conn61] Could not complete validation of table:index-18--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.899-0500 W STORAGE [conn55] Could not complete validation of table:index-18--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.865-0500 I INDEX [conn63] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.865-0500 I INDEX [conn60] validating collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.872-0500 I INDEX [conn137] validating collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.870-0500 I INDEX [conn134] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.869-0500 W STORAGE [conn101] Could not complete validation of table:index-48-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.932-0500 I NETWORK [conn89] end connection 127.0.0.1:45056 (5 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.898-0500 I INDEX [conn61] validating collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.899-0500 I INDEX [conn55] validating collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.865-0500 I INDEX [conn63] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.865-0500 I INDEX [conn60] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.872-0500 I INDEX [conn137] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.870-0500 I COMMAND [conn134] CMD: validate config.cache.chunks.test1_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.869-0500 I INDEX [conn101] validating collection config.actionlog (UUID: ff427093-1de4-4a9f-83c9-6b01392e1aea)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.960-0500 I NETWORK [conn91] end connection 127.0.0.1:45069 (4 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.899-0500 I INDEX [conn61] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.899-0500 I INDEX [conn55] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.865-0500 I INDEX [conn63] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.866-0500 I INDEX [conn60] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.873-0500 I INDEX [conn137] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.870-0500 W STORAGE [conn134] Could not complete validation of table:collection-317-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.869-0500 I INDEX [conn101] validating index consistency _id_ on collection config.actionlog
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.962-0500 I NETWORK [conn90] end connection 127.0.0.1:45068 (3 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.899-0500 I INDEX [conn61] Validation complete for collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.899-0500 I INDEX [conn55] Validation complete for collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.865-0500 I COMMAND [conn63] CMD: validate config.cache.chunks.test1_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.866-0500 I INDEX [conn60] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.873-0500 I INDEX [conn137] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.870-0500 I INDEX [conn134] validating the internal structure of index _id_ on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.869-0500 I INDEX [conn101] Validation complete for collection config.actionlog (UUID: ff427093-1de4-4a9f-83c9-6b01392e1aea). No corruption found.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:31.976-0500 I NETWORK [conn85] end connection 127.0.0.1:45048 (2 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.901-0500 I COMMAND [conn61] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.901-0500 I COMMAND [conn55] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.865-0500 W STORAGE [conn63] Could not complete validation of table:collection-325--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.866-0500 I COMMAND [conn60] CMD: validate config.cache.chunks.test1_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.873-0500 I COMMAND [conn137] CMD: validate config.cache.chunks.test1_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.870-0500 W STORAGE [conn134] Could not complete validation of table:index-319-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.869-0500 I COMMAND [conn101] CMD: validate config.changelog, full:true
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:32.002-0500 I NETWORK [conn86] end connection 127.0.0.1:45050 (1 connection now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.901-0500 W STORAGE [conn61] Could not complete validation of table:collection-29--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.901-0500 W STORAGE [conn55] Could not complete validation of table:collection-29--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.865-0500 I INDEX [conn63] validating the internal structure of index _id_ on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.866-0500 W STORAGE [conn60] Could not complete validation of table:collection-325--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.873-0500 W STORAGE [conn137] Could not complete validation of table:collection-108--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.870-0500 I INDEX [conn134] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.869-0500 W STORAGE [conn101] Could not complete validation of table:collection-49-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:32.005-0500 I NETWORK [conn84] end connection 127.0.0.1:45026 (0 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.901-0500 I INDEX [conn61] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.901-0500 I INDEX [conn55] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.865-0500 W STORAGE [conn63] Could not complete validation of table:index-326--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.866-0500 I INDEX [conn60] validating the internal structure of index _id_ on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.873-0500 I INDEX [conn137] validating the internal structure of index _id_ on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.870-0500 W STORAGE [conn134] Could not complete validation of table:index-322-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.869-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection config.changelog
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:32.821-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45076 #92 (1 connection now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.901-0500 W STORAGE [conn61] Could not complete validation of table:index-30--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.901-0500 W STORAGE [conn55] Could not complete validation of table:index-30--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.865-0500 I INDEX [conn63] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.866-0500 W STORAGE [conn60] Could not complete validation of table:index-326--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.873-0500 W STORAGE [conn137] Could not complete validation of table:index-109--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.871-0500 I INDEX [conn134] validating collection config.cache.chunks.test1_fsmdb0.agg_out (UUID: ad34fc50-677f-4846-b03c-7b24f5f1669a)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.871-0500 I INDEX [conn134] validating index consistency _id_ on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.871-0500 I INDEX [conn134] validating index consistency lastmod_1 on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:32.822-0500 I NETWORK [conn92] received client metadata from 127.0.0.1:45076 conn92: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.901-0500 I INDEX [conn55] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.865-0500 W STORAGE [conn63] Could not complete validation of table:index-329--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.866-0500 I INDEX [conn60] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.874-0500 I INDEX [conn137] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.869-0500 W STORAGE [conn101] Could not complete validation of table:index-50-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.901-0500 I INDEX [conn61] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.871-0500 I INDEX [conn134] Validation complete for collection config.cache.chunks.test1_fsmdb0.agg_out (UUID: ad34fc50-677f-4846-b03c-7b24f5f1669a). No corruption found.
[CleanupConcurrencyWorkloads:job0:agg_out:CleanupConcurrencyWorkloads] 2019-11-26T14:31:32.831-0500 Dropping all databases except for ['config', 'local', '$external', 'admin']
[CleanupConcurrencyWorkloads:job0:agg_out:CleanupConcurrencyWorkloads] 2019-11-26T14:31:32.832-0500 Dropping database test1_fsmdb0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:32.823-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45080 #93 (2 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.901-0500 W STORAGE [conn55] Could not complete validation of table:index-31--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.866-0500 I INDEX [conn63] validating collection config.cache.chunks.test1_fsmdb0.agg_out (UUID: ad34fc50-677f-4846-b03c-7b24f5f1669a)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.866-0500 W STORAGE [conn60] Could not complete validation of table:index-329--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.874-0500 W STORAGE [conn137] Could not complete validation of table:index-110--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.870-0500 I INDEX [conn101] validating collection config.changelog (UUID: 65b892c8-48e9-4ca9-8300-743a486a361f)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.901-0500 W STORAGE [conn61] Could not complete validation of table:index-31--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.871-0500 I COMMAND [conn134] CMD: validate config.cache.chunks.test1_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:32.823-0500 I NETWORK [conn93] received client metadata from 127.0.0.1:45080 conn93: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.901-0500 I INDEX [conn55] validating collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.866-0500 I INDEX [conn63] validating index consistency _id_ on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:32.832-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58228 #37 (2 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.866-0500 I INDEX [conn60] validating collection config.cache.chunks.test1_fsmdb0.agg_out (UUID: ad34fc50-677f-4846-b03c-7b24f5f1669a)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.874-0500 I INDEX [conn137] validating collection config.cache.chunks.test1_fsmdb0.agg_out (UUID: d42e625c-196f-4a50-b0c5-66d06bbde62c)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.870-0500 I INDEX [conn101] validating index consistency _id_ on collection config.changelog
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.901-0500 I INDEX [conn61] validating collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.872-0500 I INDEX [conn134] validating the internal structure of index _id_ on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:32.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.901-0500 I INDEX [conn55] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.901-0500 I INDEX [conn55] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.866-0500 I INDEX [conn63] validating index consistency lastmod_1 on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.866-0500 I INDEX [conn60] validating index consistency _id_ on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.874-0500 I INDEX [conn137] validating index consistency _id_ on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.870-0500 I INDEX [conn101] Validation complete for collection config.changelog (UUID: 65b892c8-48e9-4ca9-8300-743a486a361f). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.901-0500 I INDEX [conn61] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.873-0500 I INDEX [conn134] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:32.825-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20004
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:32.832-0500 I NETWORK [conn37] received client metadata from 127.0.0.1:58228 conn37: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.901-0500 I INDEX [conn55] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.902-0500 I COMMAND [conn55] CMD: validate config.cache.chunks.test1_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.902-0500 W STORAGE [conn55] Could not complete validation of table:collection-117--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.874-0500 I INDEX [conn137] validating index consistency lastmod_1 on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.870-0500 I COMMAND [conn101] CMD: validate config.chunks, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.901-0500 I INDEX [conn61] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.875-0500 I INDEX [conn134] validating collection config.cache.chunks.test1_fsmdb0.fsmcoll0 (UUID: 24d02c72-11d8-48c7-b13e-109658af75b4)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.875-0500 I INDEX [conn134] validating index consistency _id_ on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.875-0500 I INDEX [conn134] validating index consistency lastmod_1 on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.902-0500 I INDEX [conn55] validating the internal structure of index _id_ on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.874-0500 I INDEX [conn137] Validation complete for collection config.cache.chunks.test1_fsmdb0.agg_out (UUID: d42e625c-196f-4a50-b0c5-66d06bbde62c). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.870-0500 W STORAGE [conn101] Could not complete validation of table:collection-17-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.901-0500 I INDEX [conn61] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.866-0500 I INDEX [conn63] Validation complete for collection config.cache.chunks.test1_fsmdb0.agg_out (UUID: ad34fc50-677f-4846-b03c-7b24f5f1669a). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.866-0500 I INDEX [conn60] validating index consistency lastmod_1 on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.876-0500 I INDEX [conn134] Validation complete for collection config.cache.chunks.test1_fsmdb0.fsmcoll0 (UUID: 24d02c72-11d8-48c7-b13e-109658af75b4). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.902-0500 W STORAGE [conn55] Could not complete validation of table:index-118--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.875-0500 I COMMAND [conn137] CMD: validate config.cache.chunks.test1_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.870-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection config.chunks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.902-0500 I COMMAND [conn61] CMD: validate config.cache.chunks.test1_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.866-0500 I COMMAND [conn63] CMD: validate config.cache.chunks.test1_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.866-0500 I INDEX [conn60] Validation complete for collection config.cache.chunks.test1_fsmdb0.agg_out (UUID: ad34fc50-677f-4846-b03c-7b24f5f1669a). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.876-0500 I COMMAND [conn134] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.902-0500 I INDEX [conn55] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.875-0500 I INDEX [conn137] validating the internal structure of index _id_ on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.870-0500 W STORAGE [conn101] Could not complete validation of table:index-18-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.902-0500 W STORAGE [conn61] Could not complete validation of table:collection-117--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.866-0500 W STORAGE [conn63] Could not complete validation of table:collection-49--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.867-0500 I COMMAND [conn60] CMD: validate config.cache.chunks.test1_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.876-0500 W STORAGE [conn134] Could not complete validation of table:collection-20-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.902-0500 W STORAGE [conn55] Could not complete validation of table:index-119--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.877-0500 I INDEX [conn137] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.870-0500 I INDEX [conn101] validating the internal structure of index ns_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.902-0500 I INDEX [conn61] validating the internal structure of index _id_ on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.866-0500 I INDEX [conn63] validating the internal structure of index _id_ on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.867-0500 W STORAGE [conn60] Could not complete validation of table:collection-49--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.876-0500 I INDEX [conn134] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.902-0500 I INDEX [conn55] validating collection config.cache.chunks.test1_fsmdb0.agg_out (UUID: d42e625c-196f-4a50-b0c5-66d06bbde62c)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.879-0500 I INDEX [conn137] validating collection config.cache.chunks.test1_fsmdb0.fsmcoll0 (UUID: 06773b9f-88ae-4430-b4bd-32b9c52979b6)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.870-0500 W STORAGE [conn101] Could not complete validation of table:index-19-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.902-0500 W STORAGE [conn61] Could not complete validation of table:index-118--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.866-0500 W STORAGE [conn63] Could not complete validation of table:index-50--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.867-0500 I INDEX [conn60] validating the internal structure of index _id_ on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.876-0500 W STORAGE [conn134] Could not complete validation of table:index-23-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.902-0500 I INDEX [conn55] validating index consistency _id_ on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.879-0500 I INDEX [conn137] validating index consistency _id_ on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.870-0500 I INDEX [conn101] validating the internal structure of index ns_1_shard_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.902-0500 I INDEX [conn61] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.866-0500 I INDEX [conn63] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.867-0500 W STORAGE [conn60] Could not complete validation of table:index-50--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.876-0500 I INDEX [conn134] validating collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.902-0500 I INDEX [conn55] validating index consistency lastmod_1 on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.879-0500 I INDEX [conn137] validating index consistency lastmod_1 on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.870-0500 W STORAGE [conn101] Could not complete validation of table:index-20-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.902-0500 W STORAGE [conn61] Could not complete validation of table:index-119--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.866-0500 W STORAGE [conn63] Could not complete validation of table:index-51--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.867-0500 I INDEX [conn60] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.876-0500 I INDEX [conn134] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.902-0500 I INDEX [conn55] Validation complete for collection config.cache.chunks.test1_fsmdb0.agg_out (UUID: d42e625c-196f-4a50-b0c5-66d06bbde62c). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.879-0500 I INDEX [conn137] Validation complete for collection config.cache.chunks.test1_fsmdb0.fsmcoll0 (UUID: 06773b9f-88ae-4430-b4bd-32b9c52979b6). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.870-0500 I INDEX [conn101] validating the internal structure of index ns_1_lastmod_1 on collection config.chunks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.902-0500 I INDEX [conn61] validating collection config.cache.chunks.test1_fsmdb0.agg_out (UUID: d42e625c-196f-4a50-b0c5-66d06bbde62c)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.867-0500 I INDEX [conn63] validating collection config.cache.chunks.test1_fsmdb0.fsmcoll0 (UUID: 24d02c72-11d8-48c7-b13e-109658af75b4)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.867-0500 W STORAGE [conn60] Could not complete validation of table:index-51--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.876-0500 I INDEX [conn134] Validation complete for collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.903-0500 I COMMAND [conn55] CMD: validate config.cache.chunks.test1_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.880-0500 I COMMAND [conn137] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.870-0500 W STORAGE [conn101] Could not complete validation of table:index-21-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.902-0500 I INDEX [conn61] validating index consistency _id_ on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.867-0500 I INDEX [conn63] validating index consistency _id_ on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.867-0500 I INDEX [conn60] validating collection config.cache.chunks.test1_fsmdb0.fsmcoll0 (UUID: 24d02c72-11d8-48c7-b13e-109658af75b4)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.877-0500 I COMMAND [conn134] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.903-0500 W STORAGE [conn55] Could not complete validation of table:collection-113--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.880-0500 W STORAGE [conn137] Could not complete validation of table:collection-18--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.871-0500 I INDEX [conn101] validating collection config.chunks (UUID: e7035d0b-a892-4426-b520-83da62bcbda6)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.902-0500 I INDEX [conn61] validating index consistency lastmod_1 on collection config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.867-0500 I INDEX [conn63] validating index consistency lastmod_1 on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.867-0500 I INDEX [conn60] validating index consistency _id_ on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.878-0500 I INDEX [conn134] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.903-0500 I INDEX [conn55] validating the internal structure of index _id_ on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.880-0500 I INDEX [conn137] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.871-0500 I INDEX [conn101] validating index consistency _id_ on collection config.chunks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.902-0500 I INDEX [conn61] Validation complete for collection config.cache.chunks.test1_fsmdb0.agg_out (UUID: d42e625c-196f-4a50-b0c5-66d06bbde62c). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.867-0500 I INDEX [conn63] Validation complete for collection config.cache.chunks.test1_fsmdb0.fsmcoll0 (UUID: 24d02c72-11d8-48c7-b13e-109658af75b4). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.867-0500 I INDEX [conn60] validating index consistency lastmod_1 on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.880-0500 I INDEX [conn134] validating collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.903-0500 W STORAGE [conn55] Could not complete validation of table:index-114--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.880-0500 W STORAGE [conn137] Could not complete validation of table:index-20--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.871-0500 I INDEX [conn101] validating index consistency ns_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.903-0500 I COMMAND [conn61] CMD: validate config.cache.chunks.test1_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.867-0500 I COMMAND [conn63] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.867-0500 I INDEX [conn60] Validation complete for collection config.cache.chunks.test1_fsmdb0.fsmcoll0 (UUID: 24d02c72-11d8-48c7-b13e-109658af75b4). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.880-0500 I INDEX [conn134] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.903-0500 I INDEX [conn55] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.880-0500 I INDEX [conn137] validating collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.871-0500 I INDEX [conn101] validating index consistency ns_1_shard_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.903-0500 W STORAGE [conn61] Could not complete validation of table:collection-113--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.867-0500 W STORAGE [conn63] Could not complete validation of table:collection-29--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.868-0500 I COMMAND [conn60] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.880-0500 I INDEX [conn134] Validation complete for collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.903-0500 W STORAGE [conn55] Could not complete validation of table:index-115--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.880-0500 I INDEX [conn137] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.871-0500 I INDEX [conn101] validating index consistency ns_1_lastmod_1 on collection config.chunks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.903-0500 I INDEX [conn61] validating the internal structure of index _id_ on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.867-0500 I INDEX [conn63] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.868-0500 W STORAGE [conn60] Could not complete validation of table:collection-29--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.880-0500 I COMMAND [conn134] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.903-0500 I INDEX [conn55] validating collection config.cache.chunks.test1_fsmdb0.fsmcoll0 (UUID: 06773b9f-88ae-4430-b4bd-32b9c52979b6)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.903-0500 I INDEX [conn55] validating index consistency _id_ on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.871-0500 I INDEX [conn101] Validation complete for collection config.chunks (UUID: e7035d0b-a892-4426-b520-83da62bcbda6). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.903-0500 W STORAGE [conn61] Could not complete validation of table:index-114--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.867-0500 W STORAGE [conn63] Could not complete validation of table:index-30--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.868-0500 I INDEX [conn60] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.881-0500 I INDEX [conn134] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.880-0500 I INDEX [conn137] Validation complete for collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.903-0500 I INDEX [conn55] validating index consistency lastmod_1 on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.871-0500 I COMMAND [conn101] CMD: validate config.collections, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.903-0500 I INDEX [conn61] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.868-0500 I INDEX [conn63] validating collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.868-0500 W STORAGE [conn60] Could not complete validation of table:index-30--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.883-0500 I INDEX [conn134] validating the internal structure of index lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.881-0500 I COMMAND [conn137] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.903-0500 I INDEX [conn55] Validation complete for collection config.cache.chunks.test1_fsmdb0.fsmcoll0 (UUID: 06773b9f-88ae-4430-b4bd-32b9c52979b6). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.871-0500 W STORAGE [conn101] Could not complete validation of table:collection-51-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.903-0500 W STORAGE [conn61] Could not complete validation of table:index-115--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.868-0500 I INDEX [conn63] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.868-0500 I INDEX [conn60] validating collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.885-0500 I INDEX [conn134] validating collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.885-0500 I INDEX [conn134] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.904-0500 I COMMAND [conn55] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.871-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection config.collections
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.903-0500 I INDEX [conn61] validating collection config.cache.chunks.test1_fsmdb0.fsmcoll0 (UUID: 06773b9f-88ae-4430-b4bd-32b9c52979b6)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.868-0500 I INDEX [conn63] Validation complete for collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.868-0500 I INDEX [conn60] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.882-0500 I INDEX [conn137] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.885-0500 I INDEX [conn134] validating index consistency lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.904-0500 W STORAGE [conn55] Could not complete validation of table:collection-27--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.871-0500 W STORAGE [conn101] Could not complete validation of table:index-52-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.903-0500 I INDEX [conn61] validating index consistency _id_ on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.868-0500 I COMMAND [conn63] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.868-0500 I INDEX [conn60] Validation complete for collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.884-0500 I INDEX [conn137] validating collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.885-0500 I INDEX [conn134] Validation complete for collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.904-0500 I INDEX [conn55] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.871-0500 I INDEX [conn101] validating collection config.collections (UUID: c846d630-16e0-4675-b90f-3cd769544ef0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.903-0500 I INDEX [conn61] validating index consistency lastmod_1 on collection config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.868-0500 W STORAGE [conn63] Could not complete validation of table:collection-27--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.869-0500 I COMMAND [conn60] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.884-0500 I INDEX [conn137] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.885-0500 I COMMAND [conn134] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.904-0500 W STORAGE [conn55] Could not complete validation of table:index-28--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.872-0500 I INDEX [conn101] validating index consistency _id_ on collection config.collections
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.903-0500 I INDEX [conn61] Validation complete for collection config.cache.chunks.test1_fsmdb0.fsmcoll0 (UUID: 06773b9f-88ae-4430-b4bd-32b9c52979b6). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.868-0500 I INDEX [conn63] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.869-0500 W STORAGE [conn60] Could not complete validation of table:collection-27--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.884-0500 I INDEX [conn137] Validation complete for collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.886-0500 W STORAGE [conn134] Could not complete validation of table:collection-15-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.904-0500 I INDEX [conn55] validating collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.872-0500 I INDEX [conn101] Validation complete for collection config.collections (UUID: c846d630-16e0-4675-b90f-3cd769544ef0). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.904-0500 I COMMAND [conn61] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.868-0500 W STORAGE [conn63] Could not complete validation of table:index-28--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.869-0500 I INDEX [conn60] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.885-0500 I COMMAND [conn137] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.886-0500 I INDEX [conn134] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.904-0500 I INDEX [conn55] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.872-0500 I COMMAND [conn101] CMD: validate config.databases, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.904-0500 W STORAGE [conn61] Could not complete validation of table:collection-27--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.868-0500 I INDEX [conn63] validating collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.869-0500 W STORAGE [conn60] Could not complete validation of table:index-28--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.885-0500 W STORAGE [conn137] Could not complete validation of table:collection-15--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.886-0500 W STORAGE [conn134] Could not complete validation of table:index-16-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.904-0500 I INDEX [conn55] Validation complete for collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.872-0500 W STORAGE [conn101] Could not complete validation of table:collection-55-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.904-0500 I INDEX [conn61] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.868-0500 I INDEX [conn63] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.869-0500 I INDEX [conn60] validating collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.885-0500 I INDEX [conn137] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.886-0500 I INDEX [conn134] validating collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.905-0500 I COMMAND [conn55] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.872-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection config.databases
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.904-0500 W STORAGE [conn61] Could not complete validation of table:index-28--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.868-0500 I INDEX [conn63] Validation complete for collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.869-0500 I INDEX [conn60] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.885-0500 W STORAGE [conn137] Could not complete validation of table:index-16--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.886-0500 I INDEX [conn134] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.905-0500 W STORAGE [conn55] Could not complete validation of table:collection-25--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.872-0500 W STORAGE [conn101] Could not complete validation of table:index-56-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.904-0500 I INDEX [conn61] validating collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.869-0500 I COMMAND [conn63] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.869-0500 I INDEX [conn60] Validation complete for collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.885-0500 I INDEX [conn137] validating collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.886-0500 I INDEX [conn134] Validation complete for collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.905-0500 I INDEX [conn55] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.872-0500 I INDEX [conn101] validating collection config.databases (UUID: 1c31f9a7-ee46-41d3-a296-2e1f323b51b8)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.904-0500 I INDEX [conn61] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.869-0500 W STORAGE [conn63] Could not complete validation of table:collection-25--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.870-0500 I COMMAND [conn60] CMD: validate config.system.sessions, full:true
[executor:fsm_workload_test:job0] 2019-11-26T14:31:32.885-0500 agg_out:CleanupConcurrencyWorkloads ran in 0.07 seconds: no failures detected.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.885-0500 I INDEX [conn137] validating index consistency _id_ on collection config.transactions
[executor] 2019-11-26T14:31:33.927-0500 Waiting for threads to complete
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.887-0500 I COMMAND [conn134] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.905-0500 W STORAGE [conn55] Could not complete validation of table:index-26--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.872-0500 I INDEX [conn101] validating index consistency _id_ on collection config.databases
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.904-0500 I INDEX [conn61] Validation complete for collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.869-0500 I INDEX [conn63] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:32.884-0500 I NETWORK [conn37] end connection 127.0.0.1:58228 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:32.884-0500 I NETWORK [conn93] end connection 127.0.0.1:45080 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.870-0500 W STORAGE [conn60] Could not complete validation of table:collection-25--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0] Stopping the background check repl dbhash thread.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.885-0500 I INDEX [conn137] Validation complete for collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.887-0500 W STORAGE [conn134] Could not complete validation of table:collection-10-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[executor] 2019-11-26T14:31:33.928-0500 Threads are completed!
[executor] 2019-11-26T14:31:33.929-0500 Summary of latest execution: All 5 test(s) passed in 26.26 seconds.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.905-0500 I INDEX [conn55] validating collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.872-0500 I INDEX [conn101] Validation complete for collection config.databases (UUID: 1c31f9a7-ee46-41d3-a296-2e1f323b51b8). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.905-0500 I COMMAND [conn61] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.869-0500 W STORAGE [conn63] Could not complete validation of table:index-26--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:32.884-0500 I NETWORK [conn36] end connection 127.0.0.1:58220 (0 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:32.884-0500 I NETWORK [conn92] end connection 127.0.0.1:45076 (0 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.870-0500 I INDEX [conn60] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.888-0500 I COMMAND [conn137] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.887-0500 I INDEX [conn134] validating collection local.oplog.rs (UUID: 5f1b9ff7-2fef-4590-8e90-0f3704b0f5df)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.905-0500 I INDEX [conn55] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.873-0500 I COMMAND [conn101] CMD: validate config.lockpings, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.905-0500 W STORAGE [conn61] Could not complete validation of table:collection-25--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.869-0500 I INDEX [conn63] validating the internal structure of index lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:33.931-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58230 #38 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:33.931-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45090 #96 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.870-0500 W STORAGE [conn60] Could not complete validation of table:index-26--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0] Starting the background check repl dbhash thread.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.888-0500 W STORAGE [conn137] Could not complete validation of table:collection-10--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.913-0500 I INDEX [conn134] Validation complete for collection local.oplog.rs (UUID: 5f1b9ff7-2fef-4590-8e90-0f3704b0f5df). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.905-0500 I INDEX [conn55] Validation complete for collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.873-0500 W STORAGE [conn101] Could not complete validation of table:collection-32-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.905-0500 I INDEX [conn61] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.869-0500 W STORAGE [conn63] Could not complete validation of table:index-33--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:33.931-0500 I NETWORK [conn38] received client metadata from 127.0.0.1:58230 conn38: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:33.931-0500 I NETWORK [conn96] received client metadata from 127.0.0.1:45090 conn96: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.870-0500 I INDEX [conn60] validating the internal structure of index lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.888-0500 I INDEX [conn137] validating collection local.oplog.rs (UUID: f999d0d7-cb6c-4d2c-a5ff-807a7ed09766)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.914-0500 I COMMAND [conn134] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.906-0500 I COMMAND [conn55] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.873-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection config.lockpings
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.905-0500 W STORAGE [conn61] Could not complete validation of table:index-26--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.869-0500 I INDEX [conn63] validating collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:33.934-0500 I NETWORK [conn38] end connection 127.0.0.1:58230 (0 connections now open)
[CheckReplDBHashInBackground:job0] Resuming the background check repl dbhash thread.
[executor:fsm_workload_test:job0] 2019-11-26T14:31:33.936-0500 Running agg_out.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval TestData = new Object(); TestData["usingReplicaSetShards"] = true; TestData["runningWithAutoSplit"] = false; TestData["runningWithBalancer"] = false; TestData["fsmWorkloads"] = ["jstests/concurrency/fsm_workloads/agg_out.js"]; TestData["resmokeDbPathPrefix"] = "/home/nz_linux/data/job0/resmoke"; TestData["dbNamePrefix"] = "test2_"; TestData["sameDB"] = false; TestData["sameCollection"] = false; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "resmoke_runner"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); --readMode=commands mongodb://localhost:20007,localhost:20008 jstests/concurrency/fsm_libs/resmoke_runner.js
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:33.934-0500 I NETWORK [conn96] end connection 127.0.0.1:45090 (0 connections now open)
[executor:fsm_workload_test:job0] 2019-11-26T14:31:33.936-0500 Running agg_out:CheckReplDBHashInBackground...
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.870-0500 W STORAGE [conn60] Could not complete validation of table:index-33--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:31:33.937-0500 Starting FSM workload jstests/concurrency/fsm_workloads/agg_out.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval TestData = new Object(); TestData["usingReplicaSetShards"] = true; TestData["runningWithAutoSplit"] = false; TestData["runningWithBalancer"] = false; TestData["fsmWorkloads"] = ["jstests/concurrency/fsm_workloads/agg_out.js"]; TestData["resmokeDbPathPrefix"] = "/home/nz_linux/data/job0/resmoke"; TestData["dbNamePrefix"] = "test2_"; TestData["sameDB"] = false; TestData["sameCollection"] = false; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "resmoke_runner"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); --readMode=commands mongodb://localhost:20007,localhost:20008 jstests/concurrency/fsm_libs/resmoke_runner.js
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.894-0500 I INDEX [conn137] Validation complete for collection local.oplog.rs (UUID: f999d0d7-cb6c-4d2c-a5ff-807a7ed09766). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:33.938-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash_background.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash_background"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash_background.js
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.915-0500 I INDEX [conn134] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.906-0500 W STORAGE [conn55] Could not complete validation of table:collection-21--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.873-0500 W STORAGE [conn101] Could not complete validation of table:index-33-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.905-0500 I INDEX [conn61] validating collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.869-0500 I INDEX [conn63] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.870-0500 I INDEX [conn60] validating collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.895-0500 I COMMAND [conn137] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.917-0500 I INDEX [conn134] validating collection local.replset.election (UUID: 801ad0de-17c3-44b2-a878-e91b8de004c5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.906-0500 I INDEX [conn55] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.873-0500 I INDEX [conn101] validating the internal structure of index ping_1 on collection config.lockpings
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.905-0500 I INDEX [conn61] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.869-0500 I INDEX [conn63] validating index consistency lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.870-0500 I INDEX [conn60] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.896-0500 I INDEX [conn137] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.917-0500 I INDEX [conn134] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.906-0500 W STORAGE [conn55] Could not complete validation of table:index-22--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.873-0500 W STORAGE [conn101] Could not complete validation of table:index-34-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.905-0500 I INDEX [conn61] Validation complete for collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.869-0500 I INDEX [conn63] Validation complete for collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.870-0500 I INDEX [conn60] validating index consistency lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.898-0500 I INDEX [conn137] validating collection local.replset.election (UUID: 101a66fe-c3c0-4bee-94b9-e9bb8d04aa79)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.917-0500 I INDEX [conn134] Validation complete for collection local.replset.election (UUID: 801ad0de-17c3-44b2-a878-e91b8de004c5). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.906-0500 I INDEX [conn55] validating collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.873-0500 I INDEX [conn101] validating collection config.lockpings (UUID: f662f115-623a-496b-9953-7132cdf8c056)
[fsm_workload_test:agg_out] 2019-11-26T14:31:33.949-0500 FSM workload jstests/concurrency/fsm_workloads/agg_out.js started with pid 15592.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.906-0500 I COMMAND [conn61] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.870-0500 I COMMAND [conn63] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.870-0500 I INDEX [conn60] Validation complete for collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.898-0500 I INDEX [conn137] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.918-0500 I COMMAND [conn134] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.906-0500 I INDEX [conn55] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.873-0500 I INDEX [conn101] validating index consistency _id_ on collection config.lockpings
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.906-0500 W STORAGE [conn61] Could not complete validation of table:collection-21--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.870-0500 W STORAGE [conn63] Could not complete validation of table:collection-21--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.871-0500 I COMMAND [conn60] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.898-0500 I INDEX [conn137] Validation complete for collection local.replset.election (UUID: 101a66fe-c3c0-4bee-94b9-e9bb8d04aa79). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.919-0500 I INDEX [conn134] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.906-0500 I INDEX [conn55] Validation complete for collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.873-0500 I INDEX [conn101] validating index consistency ping_1 on collection config.lockpings
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.906-0500 I INDEX [conn61] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.870-0500 I INDEX [conn63] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.871-0500 W STORAGE [conn60] Could not complete validation of table:collection-21--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.898-0500 I COMMAND [conn137] CMD: validate local.replset.minvalid, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:33.952-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js started with pid 15595.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.921-0500 I INDEX [conn134] validating collection local.replset.minvalid (UUID: a96fd08c-e1c8-43e5-868a-0849697b175e)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.909-0500 I COMMAND [conn55] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.873-0500 I INDEX [conn101] Validation complete for collection config.lockpings (UUID: f662f115-623a-496b-9953-7132cdf8c056). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.906-0500 W STORAGE [conn61] Could not complete validation of table:index-22--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.870-0500 W STORAGE [conn63] Could not complete validation of table:index-22--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.871-0500 I INDEX [conn60] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.899-0500 I INDEX [conn137] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.921-0500 I INDEX [conn134] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.919-0500 W STORAGE [conn55] Could not complete validation of table:collection-16--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.874-0500 I COMMAND [conn101] CMD: validate config.locks, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.906-0500 I INDEX [conn61] validating collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.870-0500 I INDEX [conn63] validating collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.871-0500 W STORAGE [conn60] Could not complete validation of table:index-22--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.901-0500 I INDEX [conn137] validating collection local.replset.minvalid (UUID: 5dfed1a1-c7a1-4f91-a3da-2544e54d2e9a)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.921-0500 I INDEX [conn134] Validation complete for collection local.replset.minvalid (UUID: a96fd08c-e1c8-43e5-868a-0849697b175e). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.919-0500 I INDEX [conn55] validating collection local.oplog.rs (UUID: 307925b3-4143-4c06-a46a-f04119b3afb4)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.874-0500 W STORAGE [conn101] Could not complete validation of table:collection-28-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.906-0500 I INDEX [conn61] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.870-0500 I INDEX [conn63] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.871-0500 I INDEX [conn60] validating collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.901-0500 I INDEX [conn137] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.922-0500 I COMMAND [conn134] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.933-0500 I INDEX [conn55] Validation complete for collection local.oplog.rs (UUID: 307925b3-4143-4c06-a46a-f04119b3afb4). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.874-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection config.locks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.906-0500 I INDEX [conn61] Validation complete for collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.870-0500 I INDEX [conn63] Validation complete for collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.871-0500 I INDEX [conn60] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.901-0500 I INDEX [conn137] Validation complete for collection local.replset.minvalid (UUID: 5dfed1a1-c7a1-4f91-a3da-2544e54d2e9a). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.927-0500 I INDEX [conn134] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.934-0500 I COMMAND [conn55] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.874-0500 W STORAGE [conn101] Could not complete validation of table:index-29-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.909-0500 I COMMAND [conn61] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.872-0500 I COMMAND [conn63] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.871-0500 I INDEX [conn60] Validation complete for collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.902-0500 I COMMAND [conn137] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.928-0500 I INDEX [conn134] validating collection local.replset.oplogTruncateAfterPoint (UUID: 4ac06258-0ea7-46c8-b773-0c637830872b)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.935-0500 I INDEX [conn55] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.874-0500 I INDEX [conn101] validating the internal structure of index ts_1 on collection config.locks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.919-0500 W STORAGE [conn61] Could not complete validation of table:collection-16--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.913-0500 W STORAGE [conn63] Could not complete validation of table:collection-16--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.872-0500 I COMMAND [conn60] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.905-0500 I INDEX [conn137] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.929-0500 I INDEX [conn134] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.937-0500 I INDEX [conn55] validating collection local.replset.election (UUID: 7b059263-7419-4cf5-8072-b44957d729c9)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.874-0500 W STORAGE [conn101] Could not complete validation of table:index-30-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.919-0500 I INDEX [conn61] validating collection local.oplog.rs (UUID: 6c707c3f-4064-4e35-98fb-b2fff8245539)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.913-0500 I INDEX [conn63] validating collection local.oplog.rs (UUID: 88962763-38f7-4965-bfd6-b2a62304ae0e)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.873-0500 W STORAGE [conn60] Could not complete validation of table:collection-16--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.907-0500 I INDEX [conn137] validating collection local.replset.oplogTruncateAfterPoint (UUID: 31ce824c-ef86-4223-a4be-3069dae7b5f2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.929-0500 I INDEX [conn134] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 4ac06258-0ea7-46c8-b773-0c637830872b). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.937-0500 I INDEX [conn55] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.874-0500 I INDEX [conn101] validating the internal structure of index state_1_process_1 on collection config.locks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.932-0500 I INDEX [conn61] Validation complete for collection local.oplog.rs (UUID: 6c707c3f-4064-4e35-98fb-b2fff8245539). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.970-0500 I INDEX [conn63] Validation complete for collection local.oplog.rs (UUID: 88962763-38f7-4965-bfd6-b2a62304ae0e). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.873-0500 I INDEX [conn60] validating collection local.oplog.rs (UUID: 6d43bede-f05f-41b1-b7ac-5a32b66b8140)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.907-0500 I INDEX [conn137] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.929-0500 I COMMAND [conn134] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.937-0500 I INDEX [conn55] Validation complete for collection local.replset.election (UUID: 7b059263-7419-4cf5-8072-b44957d729c9). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.874-0500 W STORAGE [conn101] Could not complete validation of table:index-31-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.933-0500 I COMMAND [conn61] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.971-0500 I COMMAND [conn63] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.902-0500 I INDEX [conn60] Validation complete for collection local.oplog.rs (UUID: 6d43bede-f05f-41b1-b7ac-5a32b66b8140). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.907-0500 I INDEX [conn137] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 31ce824c-ef86-4223-a4be-3069dae7b5f2). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.930-0500 I INDEX [conn134] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.937-0500 I COMMAND [conn55] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.874-0500 I INDEX [conn101] validating collection config.locks (UUID: dbde06c7-d8ac-4f80-ab9f-cae486f16451)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.934-0500 I INDEX [conn61] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.972-0500 I INDEX [conn63] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.902-0500 I COMMAND [conn60] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.907-0500 I COMMAND [conn137] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.932-0500 I INDEX [conn134] validating collection local.startup_log (UUID: e8e71921-e80f-42ad-92d0-ad769374a694)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.937-0500 W STORAGE [conn55] Could not complete validation of table:collection-4--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.874-0500 I INDEX [conn101] validating index consistency _id_ on collection config.locks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.936-0500 I INDEX [conn61] validating collection local.replset.election (UUID: 6a83721b-d0f2-438c-a2e3-ec6a11e75236)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.974-0500 I INDEX [conn63] validating collection local.replset.election (UUID: d0928956-d7fc-46fe-a9bc-1f07f2435457)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.903-0500 I INDEX [conn60] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.908-0500 I INDEX [conn137] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.932-0500 I INDEX [conn134] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.937-0500 I INDEX [conn55] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.874-0500 I INDEX [conn101] validating index consistency ts_1 on collection config.locks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.936-0500 I INDEX [conn61] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.974-0500 I INDEX [conn63] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.905-0500 I INDEX [conn60] validating collection local.replset.election (UUID: bf7b5380-e70a-475e-ad1b-16751bee6907)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.910-0500 I INDEX [conn137] validating collection local.startup_log (UUID: fd9e05bb-cd6c-441c-9265-3783d4065b03)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.932-0500 I INDEX [conn134] Validation complete for collection local.startup_log (UUID: e8e71921-e80f-42ad-92d0-ad769374a694). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.939-0500 I INDEX [conn55] validating collection local.replset.minvalid (UUID: e1166351-a2a9-4335-b202-a653b252b811)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.874-0500 I INDEX [conn101] validating index consistency state_1_process_1 on collection config.locks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.936-0500 I INDEX [conn61] Validation complete for collection local.replset.election (UUID: 6a83721b-d0f2-438c-a2e3-ec6a11e75236). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.974-0500 I INDEX [conn63] Validation complete for collection local.replset.election (UUID: d0928956-d7fc-46fe-a9bc-1f07f2435457). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.905-0500 I INDEX [conn60] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.910-0500 I INDEX [conn137] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.933-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796686, 8)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.939-0500 I INDEX [conn55] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.874-0500 I INDEX [conn101] Validation complete for collection config.locks (UUID: dbde06c7-d8ac-4f80-ab9f-cae486f16451). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.936-0500 I COMMAND [conn61] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.975-0500 I COMMAND [conn63] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.905-0500 I INDEX [conn60] Validation complete for collection local.replset.election (UUID: bf7b5380-e70a-475e-ad1b-16751bee6907). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.910-0500 I INDEX [conn137] Validation complete for collection local.startup_log (UUID: fd9e05bb-cd6c-441c-9265-3783d4065b03). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.933-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-30-8224331490264904478 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 14)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.939-0500 I INDEX [conn55] Validation complete for collection local.replset.minvalid (UUID: e1166351-a2a9-4335-b202-a653b252b811). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.875-0500 I COMMAND [conn101] CMD: validate config.migrations, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.936-0500 W STORAGE [conn61] Could not complete validation of table:collection-4--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.975-0500 W STORAGE [conn63] Could not complete validation of table:collection-4--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.906-0500 I COMMAND [conn60] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.911-0500 I COMMAND [conn137] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.933-0500 I COMMAND [conn134] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.940-0500 I COMMAND [conn55] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.875-0500 W STORAGE [conn101] Could not complete validation of table:collection-22-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.936-0500 I INDEX [conn61] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.975-0500 I INDEX [conn63] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.906-0500 W STORAGE [conn60] Could not complete validation of table:collection-4--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.911-0500 I INDEX [conn137] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.934-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-31-8224331490264904478 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 14)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.944-0500 I INDEX [conn55] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.875-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection config.migrations
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.938-0500 I INDEX [conn61] validating collection local.replset.minvalid (UUID: 3f481e27-9697-4b6d-b77b-0bd9b43c5dfa)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.976-0500 I INDEX [conn63] validating collection local.replset.minvalid (UUID: 6eb6e647-60c7-450a-a905-f04052287b8a)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.906-0500 I INDEX [conn60] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.913-0500 I INDEX [conn137] validating collection local.system.replset (UUID: 3eb8c3e8-f477-448c-9a25-5db5ef40b0d6)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.935-0500 I INDEX [conn134] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.946-0500 I INDEX [conn55] validating collection local.replset.oplogTruncateAfterPoint (UUID: 022b88bb-9282-4f39-aad1-6988341f4ac1)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.875-0500 W STORAGE [conn101] Could not complete validation of table:index-23-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.938-0500 I INDEX [conn61] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.976-0500 I INDEX [conn63] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.907-0500 I INDEX [conn60] validating collection local.replset.minvalid (UUID: 6654b1c2-f323-4c78-9165-5ff31d331960)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.913-0500 I INDEX [conn137] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.935-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-29-8224331490264904478 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 14)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.946-0500 I INDEX [conn55] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.875-0500 I INDEX [conn101] validating the internal structure of index ns_1_min_1 on collection config.migrations
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.938-0500 I INDEX [conn61] Validation complete for collection local.replset.minvalid (UUID: 3f481e27-9697-4b6d-b77b-0bd9b43c5dfa). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.976-0500 I INDEX [conn63] Validation complete for collection local.replset.minvalid (UUID: 6eb6e647-60c7-450a-a905-f04052287b8a). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.907-0500 I INDEX [conn60] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.913-0500 I INDEX [conn137] Validation complete for collection local.system.replset (UUID: 3eb8c3e8-f477-448c-9a25-5db5ef40b0d6). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.938-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-33-8224331490264904478 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 22)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.946-0500 I INDEX [conn55] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 022b88bb-9282-4f39-aad1-6988341f4ac1). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.875-0500 W STORAGE [conn101] Could not complete validation of table:index-24-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.939-0500 I COMMAND [conn61] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.977-0500 I COMMAND [conn63] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.907-0500 I INDEX [conn60] Validation complete for collection local.replset.minvalid (UUID: 6654b1c2-f323-4c78-9165-5ff31d331960). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.914-0500 I COMMAND [conn137] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.938-0500 I INDEX [conn134] validating collection local.system.replset (UUID: 318b7af2-23ac-427e-bba7-a3e3f5b1e60d)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.946-0500 I COMMAND [conn55] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.875-0500 I INDEX [conn101] validating collection config.migrations (UUID: 550e32ef-0dd4-48f9-bb5e-9e21bec0734f)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.942-0500 I INDEX [conn61] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.980-0500 I INDEX [conn63] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.908-0500 I COMMAND [conn60] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.915-0500 I INDEX [conn137] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.938-0500 I INDEX [conn134] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.947-0500 I INDEX [conn55] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.875-0500 I INDEX [conn101] validating index consistency _id_ on collection config.migrations
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.944-0500 I INDEX [conn61] validating collection local.replset.oplogTruncateAfterPoint (UUID: ae67a1b2-b2be-4d7e-8242-18f3082bc280)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.982-0500 I INDEX [conn63] validating collection local.replset.oplogTruncateAfterPoint (UUID: 5d41bfc8-ebca-43f3-a038-30023495a91a)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.910-0500 I INDEX [conn60] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.917-0500 I INDEX [conn137] validating collection local.system.rollback.id (UUID: 223114bc-2956-4d9b-8f0a-5c567c2cb10e)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.938-0500 I INDEX [conn134] Validation complete for collection local.system.replset (UUID: 318b7af2-23ac-427e-bba7-a3e3f5b1e60d). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.950-0500 I INDEX [conn55] validating collection local.startup_log (UUID: 62f9eac5-a715-4818-9af1-edc47894f622)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.875-0500 I INDEX [conn101] validating index consistency ns_1_min_1 on collection config.migrations
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.944-0500 I INDEX [conn61] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.982-0500 I INDEX [conn63] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.913-0500 I INDEX [conn60] validating collection local.replset.oplogTruncateAfterPoint (UUID: fe211210-ae1b-4ab2-81d6-86b025cc1404)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.917-0500 I INDEX [conn137] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.939-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-34-8224331490264904478 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 22)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.950-0500 I INDEX [conn55] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.875-0500 I INDEX [conn101] Validation complete for collection config.migrations (UUID: 550e32ef-0dd4-48f9-bb5e-9e21bec0734f). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.944-0500 I INDEX [conn61] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: ae67a1b2-b2be-4d7e-8242-18f3082bc280). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.982-0500 I INDEX [conn63] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 5d41bfc8-ebca-43f3-a038-30023495a91a). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.913-0500 I INDEX [conn60] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.917-0500 I INDEX [conn137] Validation complete for collection local.system.rollback.id (UUID: 223114bc-2956-4d9b-8f0a-5c567c2cb10e). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.939-0500 I COMMAND [conn134] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.950-0500 I INDEX [conn55] Validation complete for collection local.startup_log (UUID: 62f9eac5-a715-4818-9af1-edc47894f622). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.876-0500 I COMMAND [conn101] CMD: validate config.mongos, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.944-0500 I COMMAND [conn61] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.983-0500 I COMMAND [conn63] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.913-0500 I INDEX [conn60] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: fe211210-ae1b-4ab2-81d6-86b025cc1404). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.918-0500 I COMMAND [conn137] CMD: validate test1_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.940-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-32-8224331490264904478 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 22)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.950-0500 I COMMAND [conn55] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.876-0500 W STORAGE [conn101] Could not complete validation of table:collection-43-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.945-0500 I INDEX [conn61] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.983-0500 I INDEX [conn63] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.913-0500 I COMMAND [conn60] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.920-0500 I INDEX [conn137] validating the internal structure of index _id_ on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.941-0500 I INDEX [conn134] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.951-0500 I INDEX [conn55] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.876-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection config.mongos
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.947-0500 I INDEX [conn61] validating collection local.startup_log (UUID: fb2ea5d2-ac7b-4697-a368-9f5d41483423)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.947-0500 I INDEX [conn61] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.947-0500 I INDEX [conn61] Validation complete for collection local.startup_log (UUID: fb2ea5d2-ac7b-4697-a368-9f5d41483423). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.922-0500 I INDEX [conn137] validating the internal structure of index _id_hashed on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.944-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-45-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1572)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.953-0500 I INDEX [conn55] validating collection local.system.replset (UUID: c43cc3e4-845d-4144-8406-83bf4df96d39)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.876-0500 W STORAGE [conn101] Could not complete validation of table:index-44-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.985-0500 I INDEX [conn63] validating collection local.startup_log (UUID: e0cc0511-0005-4584-a461-5ae30058b4c6)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.914-0500 I INDEX [conn60] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.948-0500 I COMMAND [conn61] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.924-0500 I INDEX [conn137] validating collection test1_fsmdb0.fsmcoll0 (UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.946-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-46-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1572)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.953-0500 I INDEX [conn55] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.876-0500 I INDEX [conn101] validating collection config.mongos (UUID: 57207abe-6d8d-4102-a526-bc847dba6c09)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.876-0500 I INDEX [conn101] validating index consistency _id_ on collection config.mongos
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.876-0500 I INDEX [conn101] Validation complete for collection config.mongos (UUID: 57207abe-6d8d-4102-a526-bc847dba6c09). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.949-0500 I INDEX [conn61] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.926-0500 I INDEX [conn137] validating index consistency _id_ on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.947-0500 I INDEX [conn134] validating collection local.system.rollback.id (UUID: 2d9a033a-73d1-44ef-b7d1-30b6243b0419)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.953-0500 I INDEX [conn55] Validation complete for collection local.system.replset (UUID: c43cc3e4-845d-4144-8406-83bf4df96d39). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.985-0500 I INDEX [conn63] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.916-0500 I INDEX [conn60] validating collection local.startup_log (UUID: 7b6988ea-0c65-41a6-9855-5680c2c711a1)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.876-0500 I COMMAND [conn101] CMD: validate config.settings, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.951-0500 I INDEX [conn61] validating collection local.system.replset (UUID: 2b695a66-e9c6-4bba-a36e-eb0a5cf356ba)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.926-0500 I INDEX [conn137] validating index consistency _id_hashed on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.947-0500 I INDEX [conn134] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.954-0500 I COMMAND [conn55] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.985-0500 I INDEX [conn63] Validation complete for collection local.startup_log (UUID: e0cc0511-0005-4584-a461-5ae30058b4c6). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.916-0500 I INDEX [conn60] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.876-0500 W STORAGE [conn101] Could not complete validation of table:collection-45-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.951-0500 I INDEX [conn61] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.926-0500 I INDEX [conn137] Validation complete for collection test1_fsmdb0.fsmcoll0 (UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.947-0500 I INDEX [conn134] Validation complete for collection local.system.rollback.id (UUID: 2d9a033a-73d1-44ef-b7d1-30b6243b0419). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.955-0500 I INDEX [conn55] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.986-0500 I COMMAND [conn63] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.916-0500 I INDEX [conn60] Validation complete for collection local.startup_log (UUID: 7b6988ea-0c65-41a6-9855-5680c2c711a1). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.877-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection config.settings
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.951-0500 I INDEX [conn61] Validation complete for collection local.system.replset (UUID: 2b695a66-e9c6-4bba-a36e-eb0a5cf356ba). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:31.928-0500 I NETWORK [conn137] end connection 127.0.0.1:46672 (38 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.948-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-44-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1572)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.956-0500 I INDEX [conn55] validating collection local.system.rollback.id (UUID: af3b2fdb-b5ae-49b3-a026-c55e1bf822c0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.987-0500 I INDEX [conn63] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.917-0500 I COMMAND [conn60] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.877-0500 W STORAGE [conn101] Could not complete validation of table:index-46-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.951-0500 I COMMAND [conn61] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.006-0500 I NETWORK [conn136] end connection 127.0.0.1:46654 (37 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.949-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-53-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1817)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.957-0500 I INDEX [conn55] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.989-0500 I INDEX [conn63] validating collection local.system.replset (UUID: 3b8c02e8-ec29-4e79-912d-3e315d1d851c)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.918-0500 I INDEX [conn60] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.877-0500 I INDEX [conn101] validating collection config.settings (UUID: 6d167d1d-0483-49b9-9ac8-ee5b66996698)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.952-0500 I INDEX [conn61] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.016-0500 I NETWORK [conn135] end connection 127.0.0.1:46652 (36 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.949-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-58-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1817)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.957-0500 I INDEX [conn55] Validation complete for collection local.system.rollback.id (UUID: af3b2fdb-b5ae-49b3-a026-c55e1bf822c0). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.989-0500 I INDEX [conn63] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.920-0500 I INDEX [conn60] validating collection local.system.replset (UUID: 920cbf66-0930-4ef5-82e9-10d7319f0fda)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.877-0500 I INDEX [conn101] validating index consistency _id_ on collection config.settings
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.954-0500 I INDEX [conn61] validating collection local.system.rollback.id (UUID: d6027364-802b-4e8d-ae7f-556bc4252840)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.950-0500 I COMMAND [conn134] CMD: validate test1_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.825-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46692 #138 (37 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.958-0500 I COMMAND [conn55] CMD: validate test1_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.989-0500 I INDEX [conn63] Validation complete for collection local.system.replset (UUID: 3b8c02e8-ec29-4e79-912d-3e315d1d851c). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.920-0500 I INDEX [conn60] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.877-0500 I INDEX [conn101] Validation complete for collection config.settings (UUID: 6d167d1d-0483-49b9-9ac8-ee5b66996698). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.954-0500 I INDEX [conn61] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.950-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-48-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1817)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.830-0500 I NETWORK [conn138] received client metadata from 127.0.0.1:46692 conn138: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.958-0500 W STORAGE [conn55] Could not complete validation of table:collection-109--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.990-0500 I COMMAND [conn63] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.920-0500 I INDEX [conn60] Validation complete for collection local.system.replset (UUID: 920cbf66-0930-4ef5-82e9-10d7319f0fda). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.877-0500 I COMMAND [conn101] CMD: validate config.shards, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.954-0500 I INDEX [conn61] Validation complete for collection local.system.rollback.id (UUID: d6027364-802b-4e8d-ae7f-556bc4252840). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.952-0500 I INDEX [conn134] validating the internal structure of index _id_ on collection test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.842-0500 I COMMAND [conn55] CMD: drop test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.958-0500 I INDEX [conn55] validating the internal structure of index _id_ on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.958-0500 W STORAGE [conn55] Could not complete validation of table:index-110--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.958-0500 I INDEX [conn55] validating the internal structure of index _id_hashed on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.877-0500 W STORAGE [conn101] Could not complete validation of table:collection-25-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.956-0500 I COMMAND [conn61] CMD: validate test1_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.953-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-54-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 2832)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.848-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.agg_out took 1 ms and found the collection is not sharded
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.990-0500 I INDEX [conn63] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.920-0500 I COMMAND [conn60] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.958-0500 W STORAGE [conn55] Could not complete validation of table:index-111--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.877-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection config.shards
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.956-0500 W STORAGE [conn61] Could not complete validation of table:collection-109--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.954-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-60-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 2832)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.848-0500 I SHARDING [conn55] Updating metadata for collection test1_fsmdb0.agg_out from collection version: 1|0||5ddd7d8e3bbfe7fa5630e252, shard version: 0|0||5ddd7d8e3bbfe7fa5630e252 to collection version: due to UUID change
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.848-0500 I COMMAND [ShardServerCatalogCacheLoader-1] CMD: drop config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.921-0500 I INDEX [conn60] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.959-0500 I INDEX [conn55] validating collection test1_fsmdb0.fsmcoll0 (UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.877-0500 W STORAGE [conn101] Could not complete validation of table:index-26-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.956-0500 I INDEX [conn61] validating the internal structure of index _id_ on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.955-0500 I INDEX [conn134] validating the internal structure of index _id_hashed on collection test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.992-0500 I INDEX [conn63] validating collection local.system.rollback.id (UUID: 1099f6d7-f170-471c-a0ac-dc97bd7e42b0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.848-0500 I STORAGE [ShardServerCatalogCacheLoader-1] dropCollection: config.cache.chunks.test1_fsmdb0.agg_out (d42e625c-196f-4a50-b0c5-66d06bbde62c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.923-0500 I INDEX [conn60] validating collection local.system.rollback.id (UUID: 9434a858-83b3-4d87-8d66-64bde405790b)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.960-0500 I INDEX [conn55] validating index consistency _id_ on collection test1_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:33.968-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.877-0500 I INDEX [conn101] validating the internal structure of index host_1 on collection config.shards
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.956-0500 W STORAGE [conn61] Could not complete validation of table:index-110--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.957-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-49-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 2832)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.992-0500 I INDEX [conn63] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.848-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Finishing collection drop for config.cache.chunks.test1_fsmdb0.agg_out (d42e625c-196f-4a50-b0c5-66d06bbde62c).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.923-0500 I INDEX [conn60] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.960-0500 I INDEX [conn55] validating index consistency _id_hashed on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.877-0500 W STORAGE [conn101] Could not complete validation of table:index-27-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.956-0500 I INDEX [conn61] validating the internal structure of index _id_hashed on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.959-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-55-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3014)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.993-0500 I INDEX [conn63] Validation complete for collection local.system.rollback.id (UUID: 1099f6d7-f170-471c-a0ac-dc97bd7e42b0). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.848-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test1_fsmdb0.agg_out (d42e625c-196f-4a50-b0c5-66d06bbde62c)'. Ident: 'index-109--2588534479858262356', commit timestamp: 'Timestamp(1574796692, 11)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.923-0500 I INDEX [conn60] Validation complete for collection local.system.rollback.id (UUID: 9434a858-83b3-4d87-8d66-64bde405790b). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.961-0500 I INDEX [conn55] Validation complete for collection test1_fsmdb0.fsmcoll0 (UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.877-0500 I INDEX [conn101] validating collection config.shards (UUID: ed6a2b77-0788-4ad3-a1b0-ccd61535c24f)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.956-0500 W STORAGE [conn61] Could not complete validation of table:index-111--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.960-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-62-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3014)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.994-0500 I COMMAND [conn63] CMD: validate test1_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.848-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test1_fsmdb0.agg_out (d42e625c-196f-4a50-b0c5-66d06bbde62c)'. Ident: 'index-110--2588534479858262356', commit timestamp: 'Timestamp(1574796692, 11)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.925-0500 I COMMAND [conn60] CMD: validate test1_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:31.962-0500 I NETWORK [conn55] end connection 127.0.0.1:35238 (9 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.878-0500 I INDEX [conn101] validating index consistency _id_ on collection config.shards
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.956-0500 I INDEX [conn61] validating collection test1_fsmdb0.fsmcoll0 (UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.961-0500 I INDEX [conn134] validating collection test1_fsmdb0.agg_out (UUID: 5e50e75c-c327-4f05-bb46-1ea87905b919)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.994-0500 W STORAGE [conn63] Could not complete validation of table:collection-307--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.848-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for collection 'config.cache.chunks.test1_fsmdb0.agg_out'. Ident: collection-108--2588534479858262356, commit timestamp: Timestamp(1574796692, 11)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.925-0500 W STORAGE [conn60] Could not complete validation of table:collection-307--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.016-0500 I NETWORK [conn54] end connection 127.0.0.1:35208 (8 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.878-0500 I INDEX [conn101] validating index consistency host_1 on collection config.shards
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.958-0500 I INDEX [conn61] validating index consistency _id_ on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.962-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-50-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3014)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.994-0500 I INDEX [conn63] validating the internal structure of index _id_ on collection test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.858-0500 I COMMAND [conn55] CMD: drop test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.925-0500 I INDEX [conn60] validating the internal structure of index _id_ on collection test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.852-0500 I COMMAND [ReplWriterWorker-3] CMD: drop config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.878-0500 I INDEX [conn101] Validation complete for collection config.shards (UUID: ed6a2b77-0788-4ad3-a1b0-ccd61535c24f). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.958-0500 I INDEX [conn61] validating index consistency _id_hashed on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.962-0500 I INDEX [conn134] validating index consistency _id_ on collection test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.994-0500 W STORAGE [conn63] Could not complete validation of table:index-308--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.858-0500 I STORAGE [conn55] dropCollection: test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.925-0500 W STORAGE [conn60] Could not complete validation of table:index-308--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.852-0500 I STORAGE [ReplWriterWorker-3] dropCollection: config.cache.chunks.test1_fsmdb0.agg_out (d42e625c-196f-4a50-b0c5-66d06bbde62c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796692, 11), t: 1 } and commit timestamp Timestamp(1574796692, 11)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.878-0500 I COMMAND [conn101] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.958-0500 I INDEX [conn61] Validation complete for collection test1_fsmdb0.fsmcoll0 (UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.963-0500 I INDEX [conn134] validating index consistency _id_hashed on collection test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.995-0500 I INDEX [conn63] validating the internal structure of index _id_hashed on collection test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.858-0500 I STORAGE [conn55] Finishing collection drop for test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.925-0500 I INDEX [conn60] validating the internal structure of index _id_hashed on collection test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.852-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for config.cache.chunks.test1_fsmdb0.agg_out (d42e625c-196f-4a50-b0c5-66d06bbde62c).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.878-0500 W STORAGE [conn101] Could not complete validation of table:collection-53-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:31.960-0500 I NETWORK [conn61] end connection 127.0.0.1:51878 (9 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.963-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-56-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3071)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.995-0500 W STORAGE [conn63] Could not complete validation of table:index-317--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.858-0500 I STORAGE [conn55] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38)'. Ident: 'index-102--2588534479858262356', commit timestamp: 'Timestamp(1574796692, 16)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.925-0500 W STORAGE [conn60] Could not complete validation of table:index-317--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.853-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test1_fsmdb0.agg_out (d42e625c-196f-4a50-b0c5-66d06bbde62c)'. Ident: 'index-118--7234316082034423155', commit timestamp: 'Timestamp(1574796692, 11)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.878-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.016-0500 I NETWORK [conn60] end connection 127.0.0.1:51846 (8 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.963-0500 I INDEX [conn134] Validation complete for collection test1_fsmdb0.agg_out (UUID: 5e50e75c-c327-4f05-bb46-1ea87905b919). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.995-0500 I INDEX [conn63] validating collection test1_fsmdb0.agg_out (UUID: 5e50e75c-c327-4f05-bb46-1ea87905b919)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.858-0500 I STORAGE [conn55] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38)'. Ident: 'index-103--2588534479858262356', commit timestamp: 'Timestamp(1574796692, 16)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.925-0500 I INDEX [conn60] validating collection test1_fsmdb0.agg_out (UUID: 5e50e75c-c327-4f05-bb46-1ea87905b919)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.853-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test1_fsmdb0.agg_out (d42e625c-196f-4a50-b0c5-66d06bbde62c)'. Ident: 'index-119--7234316082034423155', commit timestamp: 'Timestamp(1574796692, 11)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.878-0500 W STORAGE [conn101] Could not complete validation of table:index-54-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:33.973-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.852-0500 I COMMAND [ReplWriterWorker-14] CMD: drop config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.963-0500 I COMMAND [conn134] CMD: validate test1_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.996-0500 I INDEX [conn63] validating index consistency _id_ on collection test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.858-0500 I STORAGE [conn55] Deferring table drop for collection 'test1_fsmdb0.fsmcoll0'. Ident: collection-101--2588534479858262356, commit timestamp: Timestamp(1574796692, 16)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.927-0500 I INDEX [conn60] validating index consistency _id_ on collection test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.853-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'config.cache.chunks.test1_fsmdb0.agg_out'. Ident: collection-117--7234316082034423155, commit timestamp: Timestamp(1574796692, 11)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.878-0500 I INDEX [conn101] validating collection config.system.sessions (UUID: 9014747b-5aa2-462f-9e13-1e6b27298390)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.852-0500 I STORAGE [ReplWriterWorker-14] dropCollection: config.cache.chunks.test1_fsmdb0.agg_out (d42e625c-196f-4a50-b0c5-66d06bbde62c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796692, 11), t: 1 } and commit timestamp Timestamp(1574796692, 11)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.963-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-64-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3071)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.997-0500 I INDEX [conn63] validating index consistency _id_hashed on collection test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.866-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.fsmcoll0 took 0 ms and found the collection is not sharded
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.927-0500 I INDEX [conn60] validating index consistency _id_hashed on collection test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.861-0500 I COMMAND [ReplWriterWorker-10] CMD: drop test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.878-0500 I INDEX [conn101] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.852-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for config.cache.chunks.test1_fsmdb0.agg_out (d42e625c-196f-4a50-b0c5-66d06bbde62c).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.964-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-51-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3071)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.997-0500 I INDEX [conn63] Validation complete for collection test1_fsmdb0.agg_out (UUID: 5e50e75c-c327-4f05-bb46-1ea87905b919). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.866-0500 I SHARDING [conn55] Updating metadata for collection test1_fsmdb0.fsmcoll0 from collection version: 1|3||5ddd7d7d3bbfe7fa5630d6e7, shard version: 1|3||5ddd7d7d3bbfe7fa5630d6e7 to collection version: due to UUID change
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.927-0500 I INDEX [conn60] Validation complete for collection test1_fsmdb0.agg_out (UUID: 5e50e75c-c327-4f05-bb46-1ea87905b919). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.861-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796692, 16), t: 1 } and commit timestamp Timestamp(1574796692, 16)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.878-0500 I INDEX [conn101] Validation complete for collection config.system.sessions (UUID: 9014747b-5aa2-462f-9e13-1e6b27298390). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.852-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test1_fsmdb0.agg_out (d42e625c-196f-4a50-b0c5-66d06bbde62c)'. Ident: 'index-118--2310912778499990807', commit timestamp: 'Timestamp(1574796692, 11)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.966-0500 I INDEX [conn134] validating the internal structure of index _id_ on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.997-0500 I COMMAND [conn63] CMD: validate test1_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.867-0500 I COMMAND [ShardServerCatalogCacheLoader-1] CMD: drop config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.928-0500 I COMMAND [conn60] CMD: validate test1_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.861-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.879-0500 I COMMAND [conn101] CMD: validate config.tags, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.852-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test1_fsmdb0.agg_out (d42e625c-196f-4a50-b0c5-66d06bbde62c)'. Ident: 'index-119--2310912778499990807', commit timestamp: 'Timestamp(1574796692, 11)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.967-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-57-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3579)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.998-0500 W STORAGE [conn63] Could not complete validation of table:collection-45--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.867-0500 I STORAGE [ShardServerCatalogCacheLoader-1] dropCollection: config.cache.chunks.test1_fsmdb0.fsmcoll0 (06773b9f-88ae-4430-b4bd-32b9c52979b6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.928-0500 W STORAGE [conn60] Could not complete validation of table:collection-45--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.928-0500 I INDEX [conn60] validating the internal structure of index _id_ on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.879-0500 W STORAGE [conn101] Could not complete validation of table:collection-35-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.853-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'config.cache.chunks.test1_fsmdb0.agg_out'. Ident: collection-117--2310912778499990807, commit timestamp: Timestamp(1574796692, 11)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.968-0500 I INDEX [conn134] validating the internal structure of index _id_hashed on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.998-0500 I INDEX [conn63] validating the internal structure of index _id_ on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.867-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Finishing collection drop for config.cache.chunks.test1_fsmdb0.fsmcoll0 (06773b9f-88ae-4430-b4bd-32b9c52979b6).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.867-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0 (06773b9f-88ae-4430-b4bd-32b9c52979b6)'. Ident: 'index-105--2588534479858262356', commit timestamp: 'Timestamp(1574796692, 25)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.867-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0 (06773b9f-88ae-4430-b4bd-32b9c52979b6)'. Ident: 'index-106--2588534479858262356', commit timestamp: 'Timestamp(1574796692, 25)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.879-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection config.tags
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.860-0500 I COMMAND [ReplWriterWorker-12] CMD: drop test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.970-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-66-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3579)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.998-0500 W STORAGE [conn63] Could not complete validation of table:index-46--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.998-0500 I INDEX [conn63] validating the internal structure of index _id_hashed on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.928-0500 W STORAGE [conn60] Could not complete validation of table:index-46--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.867-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0'. Ident: collection-104--2588534479858262356, commit timestamp: Timestamp(1574796692, 25)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.879-0500 W STORAGE [conn101] Could not complete validation of table:index-36-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.861-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796692, 16), t: 1 } and commit timestamp Timestamp(1574796692, 16)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.972-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-52-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3579)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.861-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38)'. Ident: 'index-110--7234316082034423155', commit timestamp: 'Timestamp(1574796692, 16)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.998-0500 W STORAGE [conn63] Could not complete validation of table:index-47--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.928-0500 I INDEX [conn60] validating the internal structure of index _id_hashed on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.874-0500 I COMMAND [conn55] dropDatabase test1_fsmdb0 - starting
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.879-0500 I INDEX [conn101] validating the internal structure of index ns_1_min_1 on collection config.tags
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.861-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.973-0500 I INDEX [conn134] validating collection test1_fsmdb0.fsmcoll0 (UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.861-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38)'. Ident: 'index-111--7234316082034423155', commit timestamp: 'Timestamp(1574796692, 16)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:31.999-0500 I INDEX [conn63] validating collection test1_fsmdb0.fsmcoll0 (UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.928-0500 W STORAGE [conn60] Could not complete validation of table:index-47--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.874-0500 I COMMAND [conn55] dropDatabase test1_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.879-0500 W STORAGE [conn101] Could not complete validation of table:index-37-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.861-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38)'. Ident: 'index-110--2310912778499990807', commit timestamp: 'Timestamp(1574796692, 16)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.974-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-70-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313) with drop timestamp Timestamp(1574796670, 4653)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.861-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test1_fsmdb0.fsmcoll0'. Ident: collection-109--7234316082034423155, commit timestamp: Timestamp(1574796692, 16)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.000-0500 I INDEX [conn63] validating index consistency _id_ on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.929-0500 I INDEX [conn60] validating collection test1_fsmdb0.fsmcoll0 (UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.874-0500 I COMMAND [conn55] dropDatabase test1_fsmdb0 - finished
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.879-0500 I INDEX [conn101] validating the internal structure of index ns_1_tag_1 on collection config.tags
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.861-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38)'. Ident: 'index-111--2310912778499990807', commit timestamp: 'Timestamp(1574796692, 16)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.974-0500 I INDEX [conn134] validating index consistency _id_ on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.870-0500 I COMMAND [ReplWriterWorker-14] CMD: drop config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.000-0500 I INDEX [conn63] validating index consistency _id_hashed on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.930-0500 I INDEX [conn60] validating index consistency _id_ on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.880-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 took 0 ms and failed :: caused by :: NamespaceNotFound: database test1_fsmdb0 not found
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.879-0500 W STORAGE [conn101] Could not complete validation of table:index-38-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.861-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test1_fsmdb0.fsmcoll0'. Ident: collection-109--2310912778499990807, commit timestamp: Timestamp(1574796692, 16)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.975-0500 I INDEX [conn134] validating index consistency _id_hashed on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.871-0500 I STORAGE [ReplWriterWorker-14] dropCollection: config.cache.chunks.test1_fsmdb0.fsmcoll0 (06773b9f-88ae-4430-b4bd-32b9c52979b6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796692, 25), t: 1 } and commit timestamp Timestamp(1574796692, 25)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.000-0500 I INDEX [conn63] Validation complete for collection test1_fsmdb0.fsmcoll0 (UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.930-0500 I INDEX [conn60] validating index consistency _id_hashed on collection test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:32.880-0500 I SHARDING [conn55] setting this node's cached database version for test1_fsmdb0 to {}
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.879-0500 I INDEX [conn101] validating collection config.tags (UUID: d225b508-e40e-4c3c-a716-26adc4561055)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.870-0500 I COMMAND [ReplWriterWorker-2] CMD: drop config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.975-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-74-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313) with drop timestamp Timestamp(1574796670, 4653)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.871-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for config.cache.chunks.test1_fsmdb0.fsmcoll0 (06773b9f-88ae-4430-b4bd-32b9c52979b6).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.002-0500 I NETWORK [conn63] end connection 127.0.0.1:52236 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.931-0500 I INDEX [conn60] Validation complete for collection test1_fsmdb0.fsmcoll0 (UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.879-0500 I INDEX [conn101] validating index consistency _id_ on collection config.tags
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.871-0500 I STORAGE [ReplWriterWorker-2] dropCollection: config.cache.chunks.test1_fsmdb0.fsmcoll0 (06773b9f-88ae-4430-b4bd-32b9c52979b6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796692, 25), t: 1 } and commit timestamp Timestamp(1574796692, 25)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.975-0500 I INDEX [conn134] Validation complete for collection test1_fsmdb0.fsmcoll0 (UUID: dccb4b9f-92a4-4a8c-933f-ac40a7941a38). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.871-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0 (06773b9f-88ae-4430-b4bd-32b9c52979b6)'. Ident: 'index-114--7234316082034423155', commit timestamp: 'Timestamp(1574796692, 25)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.016-0500 I NETWORK [conn62] end connection 127.0.0.1:52210 (10 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:31.932-0500 I NETWORK [conn60] end connection 127.0.0.1:53130 (12 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.879-0500 I INDEX [conn101] validating index consistency ns_1_min_1 on collection config.tags
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.871-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for config.cache.chunks.test1_fsmdb0.fsmcoll0 (06773b9f-88ae-4430-b4bd-32b9c52979b6).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.976-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-68-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313) with drop timestamp Timestamp(1574796670, 4653)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.871-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0 (06773b9f-88ae-4430-b4bd-32b9c52979b6)'. Ident: 'index-115--7234316082034423155', commit timestamp: 'Timestamp(1574796692, 25)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.842-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.016-0500 I NETWORK [conn59] end connection 127.0.0.1:53096 (11 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.879-0500 I INDEX [conn101] validating index consistency ns_1_tag_1 on collection config.tags
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.871-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0 (06773b9f-88ae-4430-b4bd-32b9c52979b6)'. Ident: 'index-114--2310912778499990807', commit timestamp: 'Timestamp(1574796692, 25)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.976-0500 I NETWORK [conn134] end connection 127.0.0.1:39200 (38 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.871-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0'. Ident: collection-113--7234316082034423155, commit timestamp: Timestamp(1574796692, 25)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.843-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test1_fsmdb0.agg_out (5e50e75c-c327-4f05-bb46-1ea87905b919) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796692, 5), t: 1 } and commit timestamp Timestamp(1574796692, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.840-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.879-0500 I INDEX [conn101] Validation complete for collection config.tags (UUID: d225b508-e40e-4c3c-a716-26adc4561055). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.871-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0 (06773b9f-88ae-4430-b4bd-32b9c52979b6)'. Ident: 'index-115--2310912778499990807', commit timestamp: 'Timestamp(1574796692, 25)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.976-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-77-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad) with drop timestamp Timestamp(1574796671, 3)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.876-0500 I COMMAND [ReplWriterWorker-6] dropDatabase test1_fsmdb0 - starting
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.843-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test1_fsmdb0.agg_out (5e50e75c-c327-4f05-bb46-1ea87905b919).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.841-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test1_fsmdb0.agg_out (5e50e75c-c327-4f05-bb46-1ea87905b919) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796692, 5), t: 1 } and commit timestamp Timestamp(1574796692, 5)
[fsm_workload_test:agg_out] 2019-11-26T14:31:34.020-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.880-0500 I COMMAND [conn101] CMD: validate config.transactions, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:34.023-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.871-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0'. Ident: collection-113--2310912778499990807, commit timestamp: Timestamp(1574796692, 25)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.595-0500 Implicit session: session { "id" : UUID("413c458c-0e94-4aa0-9f41-5ad037de5265") }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.977-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-78-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad) with drop timestamp Timestamp(1574796671, 3)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.595-0500 Implicit session: session { "id" : UUID("c808c9a2-8c63-4923-aa54-235ad652431c") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.876-0500 I COMMAND [ReplWriterWorker-6] dropDatabase test1_fsmdb0 - dropped 0 collection(s)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.595-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.843-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (5e50e75c-c327-4f05-bb46-1ea87905b919)'. Ident: 'index-308--4104909142373009110', commit timestamp: 'Timestamp(1574796692, 5)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.596-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.021-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45092 #97 (1 connection now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.596-0500 true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.039-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46724 #139 (38 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.596-0500 true
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:34.073-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58288 #39 (1 connection now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.596-0500 2019-11-26T14:31:34.037-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.841-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test1_fsmdb0.agg_out (5e50e75c-c327-4f05-bb46-1ea87905b919).
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.596-0500 2019-11-26T14:31:34.032-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.880-0500 W STORAGE [conn101] Could not complete validation of table:collection-15-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.596-0500 2019-11-26T14:31:34.037-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.876-0500 I COMMAND [ReplWriterWorker-1] dropDatabase test1_fsmdb0 - starting
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.597-0500 2019-11-26T14:31:34.032-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.979-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-75-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad) with drop timestamp Timestamp(1574796671, 3)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.597-0500 2019-11-26T14:31:34.038-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.876-0500 I COMMAND [ReplWriterWorker-6] dropDatabase test1_fsmdb0 - finished
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.597-0500 2019-11-26T14:31:34.033-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.843-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (5e50e75c-c327-4f05-bb46-1ea87905b919)'. Ident: 'index-317--4104909142373009110', commit timestamp: 'Timestamp(1574796692, 5)'
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.597-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.021-0500 I NETWORK [conn97] received client metadata from 127.0.0.1:45092 conn97: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.598-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.841-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (5e50e75c-c327-4f05-bb46-1ea87905b919)'. Ident: 'index-308--8000595249233899911', commit timestamp: 'Timestamp(1574796692, 5)'
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.598-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.039-0500 I NETWORK [conn139] received client metadata from 127.0.0.1:46724 conn139: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.598-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.880-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection config.transactions
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.598-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.876-0500 I COMMAND [ReplWriterWorker-1] dropDatabase test1_fsmdb0 - dropped 0 collection(s)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.598-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.981-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-82-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440) with drop timestamp Timestamp(1574796671, 5)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.599-0500 [jsTest] New session started with sessionID: { "id" : UUID("ffcab262-2ac4-47df-adfa-ba61da7481a6") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:34.073-0500 I NETWORK [conn39] received client metadata from 127.0.0.1:58288 conn39: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:32.882-0500 I SHARDING [ReplWriterWorker-13] setting this node's cached database version for test1_fsmdb0 to {}
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.599-0500 [jsTest] New session started with sessionID: { "id" : UUID("9b3db2bf-59df-41fe-85dc-b78f2673f665") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.843-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-307--4104909142373009110, commit timestamp: Timestamp(1574796692, 5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.599-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.023-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45094 #98 (2 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.599-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.841-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (5e50e75c-c327-4f05-bb46-1ea87905b919)'. Ident: 'index-317--8000595249233899911', commit timestamp: 'Timestamp(1574796692, 5)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.600-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.039-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46726 #140 (39 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.600-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.880-0500 W STORAGE [conn101] Could not complete validation of table:index-16-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.600-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.876-0500 I COMMAND [ReplWriterWorker-1] dropDatabase test1_fsmdb0 - finished
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.600-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.982-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-84-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440) with drop timestamp Timestamp(1574796671, 5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.601-0500 2019-11-26T14:31:34.036-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:34.555-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58342 #40 (2 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.601-0500 2019-11-26T14:31:34.040-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.039-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35278 #56 (9 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.601-0500 2019-11-26T14:31:34.036-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.851-0500 I COMMAND [ReplWriterWorker-2] CMD: drop config.cache.chunks.test1_fsmdb0.agg_out
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.601-0500 2019-11-26T14:31:34.040-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.023-0500 I NETWORK [conn98] received client metadata from 127.0.0.1:45094 conn98: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.601-0500 2019-11-26T14:31:34.036-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.841-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-307--8000595249233899911, commit timestamp: Timestamp(1574796692, 5)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.602-0500 2019-11-26T14:31:34.040-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.042-0500 I NETWORK [conn140] received client metadata from 127.0.0.1:46726 conn140: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.602-0500 2019-11-26T14:31:34.036-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.880-0500 I INDEX [conn101] validating collection config.transactions (UUID: c2741992-901b-4092-a01f-3dfe88ab21c5)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.602-0500 2019-11-26T14:31:34.041-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:32.882-0500 I SHARDING [ReplWriterWorker-4] setting this node's cached database version for test1_fsmdb0 to {}
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.602-0500 2019-11-26T14:31:34.037-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.983-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-79-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440) with drop timestamp Timestamp(1574796671, 5)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.602-0500 2019-11-26T14:31:34.041-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:34.555-0500 I NETWORK [conn40] received client metadata from 127.0.0.1:58342 conn40: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.602-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.039-0500 I NETWORK [conn56] received client metadata from 127.0.0.1:35278 conn56: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.603-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.852-0500 I STORAGE [ReplWriterWorker-2] dropCollection: config.cache.chunks.test1_fsmdb0.agg_out (ad34fc50-677f-4846-b03c-7b24f5f1669a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796692, 9), t: 1 } and commit timestamp Timestamp(1574796692, 9)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.603-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.062-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45136 #99 (3 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.603-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.850-0500 I COMMAND [ReplWriterWorker-1] CMD: drop config.cache.chunks.test1_fsmdb0.agg_out
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.603-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.043-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46740 #141 (40 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.603-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.880-0500 I INDEX [conn101] validating index consistency _id_ on collection config.transactions
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.604-0500 [jsTest] New session started with sessionID: { "id" : UUID("ea7a010e-a9ff-4abf-9212-e8aa572f44cc") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.039-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51920 #62 (9 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.604-0500 [jsTest] New session started with sessionID: { "id" : UUID("e59aa90a-a735-4dd8-9de6-b60dbc4dffad") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.983-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-83-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0) with drop timestamp Timestamp(1574796671, 1013)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.604-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:34.581-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58350 #41 (3 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.604-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.043-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35296 #57 (10 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.604-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.852-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for config.cache.chunks.test1_fsmdb0.agg_out (ad34fc50-677f-4846-b03c-7b24f5f1669a).
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.605-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.062-0500 I NETWORK [conn99] received client metadata from 127.0.0.1:45136 conn99: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.605-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.850-0500 I STORAGE [ReplWriterWorker-1] dropCollection: config.cache.chunks.test1_fsmdb0.agg_out (ad34fc50-677f-4846-b03c-7b24f5f1669a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796692, 9), t: 1 } and commit timestamp Timestamp(1574796692, 9)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.605-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.044-0500 I NETWORK [conn141] received client metadata from 127.0.0.1:46740 conn141: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.605-0500 2019-11-26T14:31:34.038-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.880-0500 I INDEX [conn101] Validation complete for collection config.transactions (UUID: c2741992-901b-4092-a01f-3dfe88ab21c5). No corruption found.
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.605-0500 2019-11-26T14:31:34.043-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.039-0500 I NETWORK [conn62] received client metadata from 127.0.0.1:51920 conn62: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.605-0500 2019-11-26T14:31:34.038-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.984-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-86-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0) with drop timestamp Timestamp(1574796671, 1013)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.606-0500 2019-11-26T14:31:34.043-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:34.582-0500 I NETWORK [conn41] received client metadata from 127.0.0.1:58350 conn41: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.606-0500 2019-11-26T14:31:34.038-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.044-0500 I NETWORK [conn57] received client metadata from 127.0.0.1:35296 conn57: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.606-0500 2019-11-26T14:31:34.043-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.852-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test1_fsmdb0.agg_out (ad34fc50-677f-4846-b03c-7b24f5f1669a)'. Ident: 'index-326--4104909142373009110', commit timestamp: 'Timestamp(1574796692, 9)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.606-0500 2019-11-26T14:31:34.038-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.073-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45144 #100 (4 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.606-0500 2019-11-26T14:31:34.043-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.850-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for config.cache.chunks.test1_fsmdb0.agg_out (ad34fc50-677f-4846-b03c-7b24f5f1669a).
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.607-0500 2019-11-26T14:31:34.039-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.044-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46742 #142 (41 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.607-0500 2019-11-26T14:31:34.044-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.881-0500 I COMMAND [conn101] CMD: validate config.version, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.607-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.043-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51934 #63 (10 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.607-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.985-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-80-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0) with drop timestamp Timestamp(1574796671, 1013)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.607-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:34.593-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.608-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.093-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35334 #58 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.608-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.852-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test1_fsmdb0.agg_out (ad34fc50-677f-4846-b03c-7b24f5f1669a)'. Ident: 'index-329--4104909142373009110', commit timestamp: 'Timestamp(1574796692, 9)'
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.608-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.073-0500 I NETWORK [conn100] received client metadata from 127.0.0.1:45144 conn100: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.608-0500 [jsTest] New session started with sessionID: { "id" : UUID("d57ec6a2-f193-4b33-91be-5c1ff67c1f6b") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.850-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test1_fsmdb0.agg_out (ad34fc50-677f-4846-b03c-7b24f5f1669a)'. Ident: 'index-326--8000595249233899911', commit timestamp: 'Timestamp(1574796692, 9)'
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.609-0500 [jsTest] New session started with sessionID: { "id" : UUID("d3aeffb4-4527-4202-8dc6-c274cbfdbd20") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.044-0500 I NETWORK [conn142] received client metadata from 127.0.0.1:46742 conn142: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.609-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.881-0500 W STORAGE [conn101] Could not complete validation of table:collection-39-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.609-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.043-0500 I NETWORK [conn63] received client metadata from 127.0.0.1:51934 conn63: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.609-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.986-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-89-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4) with drop timestamp Timestamp(1574796671, 1014)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.609-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:34.594-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.fsmcoll0 to version 1|3||5ddd7d96cf8184c2e1493933 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.610-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.093-0500 I NETWORK [conn58] received client metadata from 127.0.0.1:35334 conn58: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.610-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.852-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'config.cache.chunks.test1_fsmdb0.agg_out'. Ident: collection-325--4104909142373009110, commit timestamp: Timestamp(1574796692, 9)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.610-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.096-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45172 #101 (5 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.610-0500 setting random seed: 1467416768
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.850-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test1_fsmdb0.agg_out (ad34fc50-677f-4846-b03c-7b24f5f1669a)'. Ident: 'index-329--8000595249233899911', commit timestamp: 'Timestamp(1574796692, 9)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.610-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.071-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46750 #143 (42 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.610-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.881-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection config.version
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.611-0500 Implicit session: session { "id" : UUID("16dada5b-97bd-4532-b981-3f2b59b6de76") }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.092-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51972 #64 (11 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.611-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.987-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-90-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4) with drop timestamp Timestamp(1574796671, 1014)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.611-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:34.824-0500 I COMMAND [conn41] command test2_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("86806ebc-9bc8-4f4d-8806-c8bf25e31db3") }, $db: "test2_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9\", to: \"test2_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:575 protocol:op_msg 231ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.611-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.108-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35346 #59 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.611-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.857-0500 I COMMAND [ReplWriterWorker-5] CMD: drop test1_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.612-0500 [jsTest] New session started with sessionID: { "id" : UUID("9a8bac3e-c894-4e36-bf6b-1c36295f4814") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.097-0500 I NETWORK [conn101] received client metadata from 127.0.0.1:45172 conn101: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.612-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.850-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'config.cache.chunks.test1_fsmdb0.agg_out'. Ident: collection-325--8000595249233899911, commit timestamp: Timestamp(1574796692, 9)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.612-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.071-0500 I NETWORK [conn143] received client metadata from 127.0.0.1:46750 conn143: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.612-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.881-0500 W STORAGE [conn101] Could not complete validation of table:index-40-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.612-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.092-0500 I NETWORK [conn64] received client metadata from 127.0.0.1:51972 conn64: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.613-0500 [jsTest] New session started with sessionID: { "id" : UUID("43a0e795-e212-48e0-9623-b4f6f61bbaae") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.989-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-87-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4) with drop timestamp Timestamp(1574796671, 1014)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.613-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:34.824-0500 I COMMAND [conn40] command test2_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("81d4e471-c714-4e93-a360-4ad87027dca4") }, $db: "test2_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220\", to: \"test2_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:575 protocol:op_msg 231ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.613-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.108-0500 I NETWORK [conn59] received client metadata from 127.0.0.1:35346 conn59: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.613-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.858-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796692, 15), t: 1 } and commit timestamp Timestamp(1574796692, 15)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.613-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.119-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45182 #102 (6 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.614-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.856-0500 I COMMAND [ReplWriterWorker-4] CMD: drop test1_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.614-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.088-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46770 #144 (43 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.614-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.881-0500 I INDEX [conn101] validating collection config.version (UUID: d52b8328-6d55-4f54-8cfd-e715a58e3315)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.614-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.107-0500 I NETWORK [listener] connection accepted from 127.0.0.1:51984 #65 (12 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.615-0500 [jsTest] New session started with sessionID: { "id" : UUID("02564174-c7c5-497c-aff6-a754a27b5afe") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.989-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-71-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 1521)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.615-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:35.332-0500 I COMMAND [conn41] command test2_fsmdb0 appName: "tid:3" command: enableSharding { enableSharding: "test2_fsmdb0", lsid: { id: UUID("86806ebc-9bc8-4f4d-8806-c8bf25e31db3") }, $clusterTime: { clusterTime: Timestamp(1574796694, 3098), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:163 protocol:op_msg 505ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.615-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.118-0500 W CONTROL [conn59] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 43 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.615-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.858-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38).
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.615-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.119-0500 I NETWORK [conn102] received client metadata from 127.0.0.1:45182 conn102: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.616-0500 [jsTest] New session started with sessionID: { "id" : UUID("b7954863-ae3a-4cf5-9214-0feed3df6979") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.856-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796692, 15), t: 1 } and commit timestamp Timestamp(1574796692, 15)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.616-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.089-0500 I NETWORK [conn144] received client metadata from 127.0.0.1:46770 conn144: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.616-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.881-0500 I INDEX [conn101] validating index consistency _id_ on collection config.version
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.616-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.107-0500 I NETWORK [conn65] received client metadata from 127.0.0.1:51984 conn65: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.617-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.990-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-72-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 1521)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.617-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:35.605-0500 I COMMAND [conn41] command test2_fsmdb0.agg_out appName: "tid:3" command: shardCollection { shardCollection: "test2_fsmdb0.agg_out", key: { _id: "hashed" }, lsid: { id: UUID("86806ebc-9bc8-4f4d-8806-c8bf25e31db3") }, $clusterTime: { clusterTime: Timestamp(1574796695, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:244 protocol:op_msg 272ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.617-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.123-0500 W CONTROL [conn59] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 43 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.617-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.858-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38)'. Ident: 'index-46--4104909142373009110', commit timestamp: 'Timestamp(1574796692, 15)'
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.618-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.124-0500 I NETWORK [conn101] end connection 127.0.0.1:45172 (5 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.618-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.856-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38).
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.618-0500 [jsTest] New session started with sessionID: { "id" : UUID("105ac1ab-b1f4-48b7-b943-9cc2bcc91a54") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.091-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46772 #145 (44 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.618-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.881-0500 I INDEX [conn101] Validation complete for collection config.version (UUID: d52b8328-6d55-4f54-8cfd-e715a58e3315). No corruption found.
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.619-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.118-0500 W CONTROL [conn65] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 40 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.619-0500 [jsTest] New session started with sessionID: { "id" : UUID("b0fde595-38c9-4581-8e82-f47e2419a95e") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.991-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-69-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 1521)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.619-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:35.606-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.619-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.125-0500 I NETWORK [conn59] end connection 127.0.0.1:35346 (11 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.619-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.858-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38)'. Ident: 'index-47--4104909142373009110', commit timestamp: 'Timestamp(1574796692, 15)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.620-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.147-0500 I NETWORK [conn102] end connection 127.0.0.1:45182 (4 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.620-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.856-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38)'. Ident: 'index-46--8000595249233899911', commit timestamp: 'Timestamp(1574796692, 15)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.620-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.092-0500 I NETWORK [conn145] received client metadata from 127.0.0.1:46772 conn145: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.620-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.883-0500 I COMMAND [conn101] CMD: validate local.oplog.rs, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.620-0500 Running data consistency checks for replica set: shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.123-0500 W CONTROL [conn65] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 40 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.621-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.621-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.992-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-95-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2027)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.621-0500 [jsTest] New session started with sessionID: { "id" : UUID("dbcf3c11-0c16-4fc9-8720-a3307b89dd44") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:35.607-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.621-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.140-0500 I STORAGE [ReplWriterWorker-4] createCollection: test2_fsmdb0.fsmcoll0 with provided UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0 and options: { uuid: UUID("11da2d1e-3dd5-4812-9686-c490a6bdfff0") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.621-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.858-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test1_fsmdb0.fsmcoll0'. Ident: collection-45--4104909142373009110, commit timestamp: Timestamp(1574796692, 15)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.622-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.150-0500 I NETWORK [conn98] end connection 127.0.0.1:45094 (3 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.622-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.856-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38)'. Ident: 'index-47--8000595249233899911', commit timestamp: 'Timestamp(1574796692, 15)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.622-0500 [jsTest] New session started with sessionID: { "id" : UUID("8efe2d09-83bd-4a5c-834a-2c76c400617e") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.093-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46778 #146 (45 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.622-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.883-0500 W STORAGE [conn101] Could not complete validation of table:collection-10-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.622-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.125-0500 I NETWORK [conn65] end connection 127.0.0.1:51984 (11 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.623-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.993-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-98-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2027)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.623-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:35.610-0500 I COMMAND [conn40] command test2_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("81d4e471-c714-4e93-a360-4ad87027dca4") }, $clusterTime: { clusterTime: Timestamp(1574796694, 3098), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test2_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e\", to: \"test2_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test2_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:714 protocol:op_msg 784ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.623-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.156-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test2_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.623-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.871-0500 I COMMAND [ReplWriterWorker-15] CMD: drop config.cache.chunks.test1_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.623-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.276-0500 I COMMAND [conn100] command test2_fsmdb0.fsmcoll0 appName: "MongoDB Shell" command: shardCollection { shardCollection: "test2_fsmdb0.fsmcoll0", key: { _id: "hashed" }, lsid: { id: UUID("c124d5e2-848c-4352-aca8-03195d9029ac") }, $clusterTime: { clusterTime: Timestamp(1574796694, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:245 protocol:op_msg 152ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.624-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.856-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test1_fsmdb0.fsmcoll0'. Ident: collection-45--8000595249233899911, commit timestamp: Timestamp(1574796692, 15)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.624-0500 [jsTest] New session started with sessionID: { "id" : UUID("9c01833b-78e3-4ff8-b348-0a17d7a5ad51") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.093-0500 I NETWORK [conn146] received client metadata from 127.0.0.1:46778 conn146: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.624-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.883-0500 I INDEX [conn101] validating collection local.oplog.rs (UUID: 5bb0c359-7cb9-48f8-8ff8-4b4c84c12ec5)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.624-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.140-0500 I STORAGE [ReplWriterWorker-9] createCollection: test2_fsmdb0.fsmcoll0 with provided UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0 and options: { uuid: UUID("11da2d1e-3dd5-4812-9686-c490a6bdfff0") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.624-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.994-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-92-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2027)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.624-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.157-0500 I NETWORK [conn56] end connection 127.0.0.1:35278 (10 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.625-0500 [jsTest] New session started with sessionID: { "id" : UUID("51637727-c421-4e20-b45e-5cf0e0c87159") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.871-0500 I STORAGE [ReplWriterWorker-15] dropCollection: config.cache.chunks.test1_fsmdb0.fsmcoll0 (24d02c72-11d8-48c7-b13e-109658af75b4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796692, 23), t: 1 } and commit timestamp Timestamp(1574796692, 23)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.625-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.370-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.625-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.869-0500 I COMMAND [ReplWriterWorker-11] CMD: drop config.cache.chunks.test1_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.625-0500 Recreating replica set from config {
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.104-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46782 #147 (46 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.625-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.883-0500 I INDEX [conn101] Validation complete for collection local.oplog.rs (UUID: 5bb0c359-7cb9-48f8-8ff8-4b4c84c12ec5). No corruption found.
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.625-0500 "_id" : "config-rs",
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.155-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test2_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.626-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.994-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-96-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2530)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.626-0500 "version" : 1,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.173-0500 I INDEX [ReplWriterWorker-10] index build: starting on test2_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.626-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.871-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for config.cache.chunks.test1_fsmdb0.fsmcoll0 (24d02c72-11d8-48c7-b13e-109658af75b4).
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.626-0500 "configsvr" : true,
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.371-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.fsmcoll0 to version 1|3||5ddd7d96cf8184c2e1493933 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.626-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.869-0500 I STORAGE [ReplWriterWorker-11] dropCollection: config.cache.chunks.test1_fsmdb0.fsmcoll0 (24d02c72-11d8-48c7-b13e-109658af75b4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796692, 23), t: 1 } and commit timestamp Timestamp(1574796692, 23)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.627-0500 "protocolVersion" : NumberLong(1),
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.104-0500 I NETWORK [conn147] received client metadata from 127.0.0.1:46782 conn147: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.627-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.884-0500 I COMMAND [conn101] CMD: validate local.replset.election, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.627-0500 "writeConcernMajorityJournalDefault" : true,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.157-0500 I NETWORK [conn62] end connection 127.0.0.1:51920 (10 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.627-0500 [jsTest] New session started with sessionID: { "id" : UUID("e0e48131-16f1-4d66-a4c0-2037b9e8b522") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.996-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-100-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2530)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.627-0500 "members" : [
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.173-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.628-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.871-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0 (24d02c72-11d8-48c7-b13e-109658af75b4)'. Ident: 'index-50--4104909142373009110', commit timestamp: 'Timestamp(1574796692, 23)'
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.628-0500 {
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.467-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out took 0 ms and found the collection is not sharded
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.628-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.869-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for config.cache.chunks.test1_fsmdb0.fsmcoll0 (24d02c72-11d8-48c7-b13e-109658af75b4).
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.628-0500 "_id" : 0,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.106-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46784 #148 (47 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.628-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.885-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection local.replset.election
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.628-0500 "host" : "localhost:20000",
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.173-0500 I INDEX [ReplWriterWorker-12] index build: starting on test2_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.629-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.997-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-93-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2530)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.629-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.173-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 60ed4600-812a-4bda-adc8-57b624984bf2: test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.629-0500 Implicit session: session { "id" : UUID("123b9cbc-5d74-4778-bd2e-b0b0c8cb888b") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.871-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0 (24d02c72-11d8-48c7-b13e-109658af75b4)'. Ident: 'index-51--4104909142373009110', commit timestamp: 'Timestamp(1574796692, 23)'
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.629-0500 "buildIndexes" : true,
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.545-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45192 #103 (4 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.629-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.869-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0 (24d02c72-11d8-48c7-b13e-109658af75b4)'. Ident: 'index-50--8000595249233899911', commit timestamp: 'Timestamp(1574796692, 23)'
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.630-0500 "hidden" : false,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.107-0500 I NETWORK [conn148] received client metadata from 127.0.0.1:46784 conn148: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.630-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.887-0500 I INDEX [conn101] validating collection local.replset.election (UUID: 5f00e271-c3c6-4d7b-9d39-1c8e9e8a77d4)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.630-0500 "priority" : 1,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.173-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.630-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.998-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-97-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 3036)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.630-0500 "tags" : {
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.173-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.631-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.871-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0'. Ident: collection-49--4104909142373009110, commit timestamp: Timestamp(1574796692, 23)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.631-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.546-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45194 #104 (5 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.631-0500 [jsTest] New session started with sessionID: { "id" : UUID("14ca1bb1-0519-43b6-96a0-c98f5aa61c02") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.869-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0 (24d02c72-11d8-48c7-b13e-109658af75b4)'. Ident: 'index-51--8000595249233899911', commit timestamp: 'Timestamp(1574796692, 23)'
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.631-0500 },
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.117-0500 W CONTROL [conn148] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 47 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.631-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.887-0500 I INDEX [conn101] validating index consistency _id_ on collection local.replset.election
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.632-0500 "slaveDelay" : NumberLong(0),
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.173-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 796a6d96-71c9-421c-af44-ed66d0e3440c: test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.632-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:31.999-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-102-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 3036)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.632-0500 "votes" : 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.173-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.632-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.874-0500 I COMMAND [ReplWriterWorker-10] dropDatabase test1_fsmdb0 - starting
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.632-0500 }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.546-0500 I NETWORK [conn103] received client metadata from 127.0.0.1:45192 conn103: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.633-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.869-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0'. Ident: collection-49--8000595249233899911, commit timestamp: Timestamp(1574796692, 23)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.633-0500 ],
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.120-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.633-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.887-0500 I INDEX [conn101] Validation complete for collection local.replset.election (UUID: 5f00e271-c3c6-4d7b-9d39-1c8e9e8a77d4). No corruption found.
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.633-0500 "settings" : {
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.173-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.633-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.000-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-94-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 3036)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.634-0500 "chainingAllowed" : true,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.176-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test2_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.634-0500 [jsTest] New session started with sessionID: { "id" : UUID("516a0b45-7606-4b07-a1f2-c5b9b3dd24a3") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.874-0500 I COMMAND [ReplWriterWorker-10] dropDatabase test1_fsmdb0 - dropped 0 collection(s)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.634-0500 "heartbeatIntervalMillis" : 2000,
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.546-0500 I NETWORK [conn104] received client metadata from 127.0.0.1:45194 conn104: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.634-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.872-0500 I COMMAND [ReplWriterWorker-15] dropDatabase test1_fsmdb0 - starting
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.634-0500 "heartbeatTimeoutSecs" : 10,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.121-0500 I SHARDING [conn55] setting this node's cached database version for test2_fsmdb0 to { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.635-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.887-0500 I COMMAND [conn101] CMD: validate local.replset.minvalid, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.635-0500 "electionTimeoutMillis" : 86400000,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.174-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.635-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.000-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-105-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 6)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.635-0500 "catchUpTimeoutMillis" : -1,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.177-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 60ed4600-812a-4bda-adc8-57b624984bf2: test2_fsmdb0.fsmcoll0 ( 11da2d1e-3dd5-4812-9686-c490a6bdfff0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.635-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.874-0500 I COMMAND [ReplWriterWorker-10] dropDatabase test1_fsmdb0 - finished
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.636-0500 "catchUpTakeoverDelayMillis" : 30000,
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.547-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45196 #105 (6 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.636-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.872-0500 I COMMAND [ReplWriterWorker-15] dropDatabase test1_fsmdb0 - dropped 0 collection(s)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.636-0500 "getLastErrorModes" : {
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.122-0500 W CONTROL [conn148] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 47 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.636-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.888-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection local.replset.minvalid
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.636-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.176-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test2_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.636-0500 [jsTest] New session started with sessionID: { "id" : UUID("de2d6c5a-1b5d-4435-85eb-66be3bd112b9") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.001-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-106-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 6)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.637-0500 },
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.246-0500 I STORAGE [ReplWriterWorker-6] createCollection: config.cache.chunks.test2_fsmdb0.fsmcoll0 with provided UUID: e923876b-cb14-4999-bce6-e0591b1153b2 and options: { uuid: UUID("e923876b-cb14-4999-bce6-e0591b1153b2") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.637-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:32.882-0500 I SHARDING [ReplWriterWorker-1] setting this node's cached database version for test1_fsmdb0 to {}
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.547-0500 I NETWORK [conn105] received client metadata from 127.0.0.1:45196 conn105: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.637-0500 "getLastErrorDefaults" : {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.872-0500 I COMMAND [ReplWriterWorker-15] dropDatabase test1_fsmdb0 - finished
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.637-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.124-0500 I NETWORK [conn147] end connection 127.0.0.1:46782 (46 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.637-0500 "w" : 1,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.890-0500 I INDEX [conn101] validating collection local.replset.minvalid (UUID: ce934bfb-84f4-4d44-a963-37c09c6c95a6)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.638-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.179-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 796a6d96-71c9-421c-af44-ed66d0e3440c: test2_fsmdb0.fsmcoll0 ( 11da2d1e-3dd5-4812-9686-c490a6bdfff0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.638-0500 "wtimeout" : 0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.002-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-104-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 6)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.638-0500 Running data consistency checks for replica set: shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.263-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns config.cache.chunks.test2_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.638-0500 },
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.036-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52278 #64 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.638-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.555-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45198 #106 (7 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.639-0500 "replicaSetId" : ObjectId("5ddd7d655cde74b6784bb14d")
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:32.881-0500 I SHARDING [ReplWriterWorker-13] setting this node's cached database version for test1_fsmdb0 to {}
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.639-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.125-0500 I NETWORK [conn148] end connection 127.0.0.1:46784 (45 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.639-0500 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.890-0500 I INDEX [conn101] validating index consistency _id_ on collection local.replset.minvalid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.639-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.246-0500 I STORAGE [ReplWriterWorker-1] createCollection: config.cache.chunks.test2_fsmdb0.fsmcoll0 with provided UUID: e923876b-cb14-4999-bce6-e0591b1153b2 and options: { uuid: UUID("e923876b-cb14-4999-bce6-e0591b1153b2") }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.262-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns config.cache.chunks.test2_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.639-0500 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.281-0500 I INDEX [ReplWriterWorker-5] index build: starting on config.cache.chunks.test2_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.639-0500 [jsTest] New session started with sessionID: { "id" : UUID("6872c0fa-8a04-4675-b5c6-53b2d68b6900") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.037-0500 I NETWORK [conn64] received client metadata from 127.0.0.1:52278 conn64: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.640-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.555-0500 I NETWORK [conn106] received client metadata from 127.0.0.1:45198 conn106: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.640-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.036-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53164 #61 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.036-0500 I NETWORK [conn61] received client metadata from 127.0.0.1:53164 conn61: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.640-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.890-0500 I INDEX [conn101] Validation complete for collection local.replset.minvalid (UUID: ce934bfb-84f4-4d44-a963-37c09c6c95a6). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.640-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.004-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-115-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f) with drop timestamp Timestamp(1574796674, 1015)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.640-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.280-0500 I INDEX [ReplWriterWorker-13] index build: starting on config.cache.chunks.test2_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.641-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.281-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.641-0500 [jsTest] New session started with sessionID: { "id" : UUID("36c7c40d-5a7a-480b-b8ad-b2470127e3be") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.041-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52300 #65 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.641-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.557-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45202 #107 (8 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.641-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.127-0500 I STORAGE [conn55] createCollection: test2_fsmdb0.fsmcoll0 with provided UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0 and options: { uuid: UUID("11da2d1e-3dd5-4812-9686-c490a6bdfff0") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.641-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.041-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53186 #62 (13 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.641-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.891-0500 I COMMAND [conn101] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.642-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.005-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-120-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f) with drop timestamp Timestamp(1574796674, 1015)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.642-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.280-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.642-0500 [jsTest] New session started with sessionID: { "id" : UUID("d16eb044-d9d3-4cbd-9388-a3eab305ca6c") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.281-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 795110fa-a9a7-4d5d-9866-e1d46cd48ea6: config.cache.chunks.test2_fsmdb0.fsmcoll0 (e923876b-cb14-4999-bce6-e0591b1153b2 ): indexes: 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.642-0500 Recreating replica set from config {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.041-0500 I NETWORK [conn65] received client metadata from 127.0.0.1:52300 conn65: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.642-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.557-0500 I NETWORK [conn107] received client metadata from 127.0.0.1:45202 conn107: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.642-0500 "_id" : "shard-rs0",
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.138-0500 I INDEX [conn55] index build: done building index _id_ on ns test2_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.643-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.041-0500 I NETWORK [conn62] received client metadata from 127.0.0.1:53186 conn62: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.643-0500 "version" : 2,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.895-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.643-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.006-0500 I NETWORK [conn133] end connection 127.0.0.1:39180 (37 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.643-0500 "protocolVersion" : NumberLong(1),
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.280-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 48dff4cf-8b10-4a22-8ce4-64b1e4ac8630: config.cache.chunks.test2_fsmdb0.fsmcoll0 (e923876b-cb14-4999-bce6-e0591b1153b2 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.643-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.281-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.643-0500 "writeConcernMajorityJournalDefault" : true,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.085-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52332 #66 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.644-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.558-0500 I NETWORK [conn104] end connection 127.0.0.1:45194 (7 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.644-0500 "members" : [
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.138-0500 I INDEX [conn55] Registering index build: e1856d6f-b86a-4694-9413-f24c1789c0cd
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.644-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.086-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53222 #63 (14 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.644-0500 {
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.900-0500 I INDEX [conn101] validating collection local.replset.oplogTruncateAfterPoint (UUID: b5258dce-fb89-4436-a191-b8586ea2e6c0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.644-0500 [jsTest] New session started with sessionID: { "id" : UUID("6302147b-f50c-4534-8add-610d65dc02dc") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.006-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-113-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f) with drop timestamp Timestamp(1574796674, 1015)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.644-0500 "_id" : 0,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.280-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.645-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.282-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.645-0500 "host" : "localhost:20001",
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.085-0500 I NETWORK [conn66] received client metadata from 127.0.0.1:52332 conn66: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.645-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.558-0500 I NETWORK [conn103] end connection 127.0.0.1:45192 (6 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.645-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.151-0500 I NETWORK [conn140] end connection 127.0.0.1:46726 (44 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.645-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.086-0500 I NETWORK [conn63] received client metadata from 127.0.0.1:53222 conn63: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.645-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 "priority" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 "_id" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 "host" : "localhost:20002",
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.646-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "_id" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "host" : "localhost:20003",
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 ],
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "settings" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "chainingAllowed" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "heartbeatIntervalMillis" : 2000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "heartbeatTimeoutSecs" : 10,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "electionTimeoutMillis" : 86400000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "catchUpTimeoutMillis" : -1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "catchUpTakeoverDelayMillis" : 30000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500 "getLastErrorModes" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.647-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 "getLastErrorDefaults" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 "w" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 "wtimeout" : 0
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 "replicaSetId" : ObjectId("5ddd7d683bbfe7fa5630d3b8")
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 [jsTest] New session started with sessionID: { "id" : UUID("f48f7886-21a0-465b-bfb0-95c9b63f8820") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 Recreating replica set from config {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 "_id" : "shard-rs1",
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 "version" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 "protocolVersion" : NumberLong(1),
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 "writeConcernMajorityJournalDefault" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 "members" : [
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 "_id" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.648-0500 "host" : "localhost:20004",
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 "priority" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 "_id" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 "host" : "localhost:20005",
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.649-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "_id" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "host" : "localhost:20006",
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 ],
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "settings" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "chainingAllowed" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "heartbeatIntervalMillis" : 2000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "heartbeatTimeoutSecs" : 10,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "electionTimeoutMillis" : 86400000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "catchUpTimeoutMillis" : -1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "catchUpTakeoverDelayMillis" : 30000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "getLastErrorModes" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.650-0500 "getLastErrorDefaults" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500 "w" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500 "wtimeout" : 0
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500 "replicaSetId" : ObjectId("5ddd7d6bcf8184c2e1492eba")
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500
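Both replica-set configs dumped above carry the same test-oriented "settings" sub-document: an electionTimeoutMillis of 86400000 (24 hours) so that heartbeat hiccups never trigger an unplanned election mid-workload, and catchUpTimeoutMillis of -1 for unbounded primary catch-up. As a minimal sketch (plain Python; the `check_settings` helper is hypothetical, not part of resmoke), the values transcribed from the log can be sanity-checked like so:

```python
# Settings sub-document transcribed verbatim from the replica-set
# config printed in the log above.
settings = {
    "chainingAllowed": True,
    "heartbeatIntervalMillis": 2000,
    "heartbeatTimeoutSecs": 10,
    "electionTimeoutMillis": 86400000,
    "catchUpTimeoutMillis": -1,
    "catchUpTakeoverDelayMillis": 30000,
}

def check_settings(s):
    """Return True when the settings look like a test fixture's:
    a >= 24-hour election timeout (no spurious elections during the
    workload) and unbounded catch-up on election (-1)."""
    day_ms = 24 * 60 * 60 * 1000
    return s["electionTimeoutMillis"] >= day_ms and s["catchUpTimeoutMillis"] == -1

print(check_settings(settings))  # -> True
```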
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500 [jsTest] New session started with sessionID: { "id" : UUID("fccb7a4d-1372-4312-a20b-2cdb49137a26") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500 [jsTest] New session started with sessionID: { "id" : UUID("abf0e47b-01a9-4c18-8b18-0f12e3f068e4") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500 [jsTest] New session started with sessionID: { "id" : UUID("572d4326-1ec8-41ee-ae4a-238f0e74656d") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.651-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500 [jsTest] New session started with sessionID: { "id" : UUID("22b25883-a2d7-4cf7-b44e-8dcdaef157e2") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500 [jsTest] New session started with sessionID: { "id" : UUID("77565c0b-d064-4dbb-8427-6edf30bf6c33") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500 [jsTest] New session started with sessionID: { "id" : UUID("a602f043-c908-47a0-a53d-4213ccdb53a8") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.652-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 [jsTest] Workload(s) started: jstests/concurrency/fsm_workloads/agg_out.js
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 [jsTest] New session started with sessionID: { "id" : UUID("c124d5e2-848c-4352-aca8-03195d9029ac") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 Using 5 threads (requested 5)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 Implicit session: session { "id" : UUID("2cf33cb2-57b4-4132-9483-e76c4026ff78") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 Implicit session: session { "id" : UUID("301718ce-10ae-4d95-9f2e-af554fcb26e3") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 Implicit session: session { "id" : UUID("65f92671-6715-4305-a750-385cee326fef") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.653-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 Implicit session: session { "id" : UUID("2854c92f-3fe1-4ae7-8d44-2574a251b9c3") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 Implicit session: session { "id" : UUID("4942bfda-c623-4bb4-ae28-97fe704cc5db") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 MongoDB server version: 0.0.0
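The worker threads above all connect through the same multi-host URI, listing both mongos routers (ports 20007 and 20008). A small standard-library sketch of how such a URI decomposes into hosts and options, with no driver required:

```python
# Sketch: split a multi-host MongoDB URI like the ones logged above
# into its host list and option map, using only the standard library.
from urllib.parse import urlsplit, parse_qs

uri = "mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb"

parts = urlsplit(uri)
hosts = parts.netloc.split(",")  # each mongos the shell may target
options = {k: v[0] for k, v in parse_qs(parts.query).items()}

print(hosts)    # -> ['localhost:20007', 'localhost:20008']
print(options)  # -> {'compressors': 'disabled', 'gssapiServiceName': 'mongodb'}
```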
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 [tid:2] setting random seed: 1031834282
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 [tid:0] setting random seed: 2891058233
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 [tid:1] setting random seed: 506169768
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 [tid:4] setting random seed: 2891409764
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 [tid:3] setting random seed: 1366134420
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 [tid:2]
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 [jsTest] New session started with sessionID: { "id" : UUID("edd46bd6-7845-419a-b11b-5e92d9ad8dd5") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 [tid:3]
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 [jsTest] New session started with sessionID: { "id" : UUID("86806ebc-9bc8-4f4d-8806-c8bf25e31db3") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.654-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.655-0500 [tid:4]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.900-0500 I INDEX [conn101] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.655-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js finished.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.007-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-110-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a) with drop timestamp Timestamp(1574796674, 1518)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.281-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.655-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.282-0500 I SHARDING [ReplWriterWorker-11] Marking collection config.cache.chunks.test2_fsmdb0.fsmcoll0 as collection version:
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.656-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash_background.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash_background"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash_background.js
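The background dbhash hook above launches a fresh mongo shell whose `--eval` string builds TestData through repeated assignments, which makes the nesting hard to read in one line. Re-expressed as a Python dict purely for readability (values transcribed from the invocation above; nothing added):

```python
# The TestData object assembled by the run_check_repl_dbhash_background
# hook invocation, transcribed from the log into nested-dict form.
test_data = {
    "minPort": 20020,
    "maxPort": 20249,
    "peerPids": [13986, 14076, 14079, 14082, 14340, 14343, 14346],
    "failIfUnterminatedProcesses": True,
    "numTestClients": 1,
    "testName": "run_check_repl_dbhash_background",
    "setParameters": {
        "logComponentVerbosity": {
            "replication": {"rollback": 2},
            "transaction": 4,
        }
    },
    "setParametersMongos": {"logComponentVerbosity": {"transaction": 3}},
    "transactionLifetimeLimitSeconds": 86400,
}

print(test_data["setParameters"]["logComponentVerbosity"]["transaction"])  # -> 4
```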
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.130-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52364 #67 (14 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.656-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.657-0500 [jsTest] New session started with sessionID: { "id" : UUID("3be876c6-d64e-4ca7-b4fd-5b1e9c261d78") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.657-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.657-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.657-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.657-0500 [tid:1]
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.657-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.657-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.657-0500 [jsTest] New session started with sessionID: { "id" : UUID("81d4e471-c714-4e93-a360-4ad87027dca4") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.657-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.657-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.657-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.657-0500 [tid:0]
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.657-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.657-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.657-0500 [jsTest] New session started with sessionID: { "id" : UUID("dc61b4f9-aa45-4579-b7ea-98c3efc65c32") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.560-0500 I NETWORK [conn105] end connection 127.0.0.1:45196 (5 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.658-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.662-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:35.663-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.151-0500 I INDEX [conn55] index build: starting on test2_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.131-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53254 #64 (15 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.900-0500 I INDEX [conn101] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: b5258dce-fb89-4436-a191-b8586ea2e6c0). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.008-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-116-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a) with drop timestamp Timestamp(1574796674, 1518)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.282-0500 I SHARDING [ReplWriterWorker-0] Marking collection config.cache.chunks.test2_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.286-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.130-0500 I NETWORK [conn67] received client metadata from 127.0.0.1:52364 conn67: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.572-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45204 #108 (6 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.151-0500 I INDEX [conn55] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.131-0500 I NETWORK [conn64] received client metadata from 127.0.0.1:53254 conn64: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.901-0500 I COMMAND [conn101] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.009-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-107-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a) with drop timestamp Timestamp(1574796674, 1518)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.284-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.286-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.141-0500 W CONTROL [conn67] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 718 }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.572-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45206 #109 (7 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.151-0500 I STORAGE [conn55] Index build initialized: e1856d6f-b86a-4694-9413-f24c1789c0cd: test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.141-0500 W CONTROL [conn64] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 323 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.901-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.009-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-119-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954) with drop timestamp Timestamp(1574796674, 1971)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.284-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.287-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 795110fa-a9a7-4d5d-9866-e1d46cd48ea6: config.cache.chunks.test2_fsmdb0.fsmcoll0 ( e923876b-cb14-4999-bce6-e0591b1153b2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.145-0500 W CONTROL [conn67] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 718 }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.572-0500 I NETWORK [conn108] received client metadata from 127.0.0.1:45204 conn108: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.151-0500 I INDEX [conn55] Waiting for index build to complete: e1856d6f-b86a-4694-9413-f24c1789c0cd
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.146-0500 W CONTROL [conn64] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 323 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.904-0500 I INDEX [conn101] validating collection local.startup_log (UUID: a1488758-c116-4144-adba-02b8f3b8144d)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.010-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-122-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954) with drop timestamp Timestamp(1574796674, 1971)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.287-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 48dff4cf-8b10-4a22-8ce4-64b1e4ac8630: config.cache.chunks.test2_fsmdb0.fsmcoll0 ( e923876b-cb14-4999-bce6-e0591b1153b2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.447-0500 I STORAGE [ReplWriterWorker-8] createCollection: test2_fsmdb0.agg_out with provided UUID: 9d032268-b7b7-4429-b5aa-61c323334f6e and options: { uuid: UUID("9d032268-b7b7-4429-b5aa-61c323334f6e") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.148-0500 I NETWORK [conn67] end connection 127.0.0.1:52364 (13 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.572-0500 I NETWORK [conn109] received client metadata from 127.0.0.1:45206 conn109: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.152-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.148-0500 I NETWORK [conn64] end connection 127.0.0.1:53254 (14 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.904-0500 I INDEX [conn101] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.012-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-117-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954) with drop timestamp Timestamp(1574796674, 1971)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.447-0500 I STORAGE [ReplWriterWorker-10] createCollection: test2_fsmdb0.agg_out with provided UUID: 9d032268-b7b7-4429-b5aa-61c323334f6e and options: { uuid: UUID("9d032268-b7b7-4429-b5aa-61c323334f6e") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.462-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.157-0500 I NETWORK [conn64] end connection 127.0.0.1:52278 (12 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.582-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45210 #110 (8 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.152-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.157-0500 I NETWORK [conn61] end connection 127.0.0.1:53164 (13 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.904-0500 I INDEX [conn101] Validation complete for collection local.startup_log (UUID: a1488758-c116-4144-adba-02b8f3b8144d). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.013-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-125-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb) with drop timestamp Timestamp(1574796674, 2528)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.462-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.498-0500 I INDEX [ReplWriterWorker-13] index build: starting on test2_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.667-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js started with pid 15690.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.204-0500 I STORAGE [ReplWriterWorker-7] createCollection: test2_fsmdb0.fsmcoll0 with provided UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0 and options: { uuid: UUID("11da2d1e-3dd5-4812-9686-c490a6bdfff0") }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.582-0500 I NETWORK [conn110] received client metadata from 127.0.0.1:45210 conn110: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.154-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.190-0500 I STORAGE [ReplWriterWorker-7] createCollection: test2_fsmdb0.fsmcoll0 with provided UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0 and options: { uuid: UUID("11da2d1e-3dd5-4812-9686-c490a6bdfff0") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.904-0500 I COMMAND [conn101] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.014-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-126-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb) with drop timestamp Timestamp(1574796674, 2528)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.498-0500 I INDEX [ReplWriterWorker-6] index build: starting on test2_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.498-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.218-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.584-0500 I NETWORK [conn109] end connection 127.0.0.1:45206 (7 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.157-0500 I NETWORK [conn139] end connection 127.0.0.1:46724 (43 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.203-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.905-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.015-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-124-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb) with drop timestamp Timestamp(1574796674, 2528)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.498-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.498-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 53257554-f5cb-4b16-8181-273ab402b6c0: test2_fsmdb0.agg_out (9d032268-b7b7-4429-b5aa-61c323334f6e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.235-0500 I INDEX [ReplWriterWorker-2] index build: starting on test2_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.584-0500 I NETWORK [conn108] end connection 127.0.0.1:45204 (6 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.157-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: e1856d6f-b86a-4694-9413-f24c1789c0cd: test2_fsmdb0.fsmcoll0 ( 11da2d1e-3dd5-4812-9686-c490a6bdfff0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.218-0500 I INDEX [ReplWriterWorker-1] index build: starting on test2_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.907-0500 I INDEX [conn101] validating collection local.system.replset (UUID: ea98bf03-b956-4e01-b9a4-857e601cceda)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.016-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-111-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 2529)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.498-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: acf702db-2fd5-4c4b-af3f-e75a2972042b: test2_fsmdb0.agg_out (9d032268-b7b7-4429-b5aa-61c323334f6e ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.498-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.235-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.743-0500 I COMMAND [conn106] command test2_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("dc61b4f9-aa45-4579-b7ea-98c3efc65c32") }, $db: "test2_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 151ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.157-0500 I INDEX [conn55] Index build completed: e1856d6f-b86a-4694-9413-f24c1789c0cd
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.218-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.907-0500 I INDEX [conn101] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.016-0500 I NETWORK [conn132] end connection 127.0.0.1:39178 (36 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.498-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.498-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.235-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 5ed6793c-3cc6-4e1b-b0dc-e76fca095fed: test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0 ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.750-0500 I COMMAND [conn110] command test2_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("edd46bd6-7845-419a-b11b-5e92d9ad8dd5") }, $db: "test2_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 157ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.175-0500 I SHARDING [conn55] CMD: shardcollection: { _shardsvrShardCollection: "test2_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("c124d5e2-848c-4352-aca8-03195d9029ac"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 11), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45144", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 11), t: 1 } }, $db: "admin" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.218-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 1cfd3983-835c-4328-8400-a85f60138d9c: test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.907-0500 I INDEX [conn101] Validation complete for collection local.system.replset (UUID: ea98bf03-b956-4e01-b9a4-857e601cceda). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.017-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-112-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 2529)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:35.689-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.499-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:36.203-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.501-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test2_fsmdb0.agg_out
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.559-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.236-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:37.559-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.754-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.560-0500 Implicit session: session { "id" : UUID("2a4460f0-d726-4468-92b3-65eac80310b2") }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.175-0500 I SHARDING [conn55] about to log metadata event into changelog: { _id: "nz_desktop:20004-2019-11-26T14:31:34.175-0500-5ddd7d96cf8184c2e1493932", server: "nz_desktop:20004", shard: "shard-rs1", clientAddr: "127.0.0.1:46028", time: new Date(1574796694175), what: "shardCollection.start", ns: "test2_fsmdb0.fsmcoll0", details: { shardKey: { _id: "hashed" }, collection: "test2_fsmdb0.fsmcoll0", uuid: UUID("11da2d1e-3dd5-4812-9686-c490a6bdfff0"), empty: true, fromMapReduce: false, primary: "shard-rs1:shard-rs1/localhost:20004,localhost:20005,localhost:20006", numChunks: 4 } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:37.560-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.218-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.560-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.908-0500 I COMMAND [conn101] CMD: validate local.system.rollback.id, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:31:37.560-0500 [jsTest] New session started with sessionID: { "id" : UUID("4c1c0c11-e6d3-4bb2-8856-76e3963284b8") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:35.686-0500 I NETWORK [conn40] end connection 127.0.0.1:58342 (2 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.560-0500 true
[fsm_workload_test:agg_out] 2019-11-26T14:31:37.560-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.017-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-108-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 2529)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.561-0500 2019-11-26T14:31:35.747-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.502-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test2_fsmdb0.agg_out
[fsm_workload_test:agg_out] 2019-11-26T14:31:37.561-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.502-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 53257554-f5cb-4b16-8181-273ab402b6c0: test2_fsmdb0.agg_out ( 9d032268-b7b7-4429-b5aa-61c323334f6e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.561-0500 2019-11-26T14:31:35.747-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.237-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:37.561-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.561-0500 2019-11-26T14:31:35.748-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.561-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.561-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.561-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500 [jsTest] New session started with sessionID: { "id" : UUID("8deba6e4-1fe8-4047-8bb7-2b338206cbba") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500 2019-11-26T14:31:35.751-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500 2019-11-26T14:31:35.751-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500 2019-11-26T14:31:35.752-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500 2019-11-26T14:31:35.752-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500 2019-11-26T14:31:35.752-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500 [jsTest] New session started with sessionID: { "id" : UUID("a88e614d-8391-4db4-b922-5d97ec011901") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500 2019-11-26T14:31:35.754-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500 2019-11-26T14:31:35.754-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500 2019-11-26T14:31:35.754-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500 2019-11-26T14:31:35.754-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500 2019-11-26T14:31:35.755-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.562-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.756-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.fsmcoll0 to version 1|3||5ddd7d96cf8184c2e1493933 took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:37.562-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.227-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.fsmcoll0 to version 1|3||5ddd7d96cf8184c2e1493933 took 1 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.563-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.218-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:37.563-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.909-0500 I INDEX [conn101] validating the internal structure of index _id_ on collection local.system.rollback.id
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.563-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:36.154-0500 I COMMAND [conn41] command test2_fsmdb0 appName: "tid:3" command: enableSharding { enableSharding: "test2_fsmdb0", lsid: { id: UUID("86806ebc-9bc8-4f4d-8806-c8bf25e31db3") }, $clusterTime: { clusterTime: Timestamp(1574796695, 527), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:163 protocol:op_msg 507ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:37.563-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.018-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-133-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae) with drop timestamp Timestamp(1574796674, 3539)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.563-0500 [jsTest] New session started with sessionID: { "id" : UUID("fc8656c7-c687-49cd-a590-8f640371a9e9") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.504-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: acf702db-2fd5-4c4b-af3f-e75a2972042b: test2_fsmdb0.agg_out ( 9d032268-b7b7-4429-b5aa-61c323334f6e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[fsm_workload_test:agg_out] 2019-11-26T14:31:37.564-0500 [jsTest] Workload(s) completed in 1918 ms: jstests/concurrency/fsm_workloads/agg_out.js
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.624-0500 I STORAGE [ReplWriterWorker-15] createCollection: test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f with provided UUID: 7a23accc-ea31-4729-b99e-5394e0ac262c and options: { uuid: UUID("7a23accc-ea31-4729-b99e-5394e0ac262c"), temp: true }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.564-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.239-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test2_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:37.564-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.784-0500 I COMMAND [conn107] command test2_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("3be876c6-d64e-4ca7-b4fd-5b1e9c261d78") }, $db: "test2_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 192ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.564-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.227-0500 I SHARDING [conn55] Marking collection test2_fsmdb0.fsmcoll0 as collection version: 1|3||5ddd7d96cf8184c2e1493933, shard version: 1|3||5ddd7d96cf8184c2e1493933
[fsm_workload_test:agg_out] 2019-11-26T14:31:37.564-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.221-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test2_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.564-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.911-0500 I INDEX [conn101] validating collection local.system.rollback.id (UUID: 0ad52f2a-9d3e-4f9f-b91b-17a9c570ab7e)
[fsm_workload_test:agg_out] 2019-11-26T14:31:37.565-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:36.163-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.565-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.020-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-134-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae) with drop timestamp Timestamp(1574796674, 3539)
[fsm_workload_test:agg_out] 2019-11-26T14:31:37.565-0500 FSM workload jstests/concurrency/fsm_workloads/agg_out.js finished.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.624-0500 I STORAGE [ReplWriterWorker-15] createCollection: test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f with provided UUID: 7a23accc-ea31-4729-b99e-5394e0ac262c and options: { uuid: UUID("7a23accc-ea31-4729-b99e-5394e0ac262c"), temp: true }
[executor:fsm_workload_test:job0] 2019-11-26T14:31:37.566-0500 agg_out.js ran in 3.63 seconds: no failures detected.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.565-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.638-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.566-0500 Implicit session: session { "id" : UUID("9eed9a06-1666-406d-9781-cff6e9e79577") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.241-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 5ed6793c-3cc6-4e1b-b0dc-e76fca095fed: test2_fsmdb0.fsmcoll0 ( 11da2d1e-3dd5-4812-9686-c490a6bdfff0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.567-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.826-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.567-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.227-0500 I STORAGE [ShardServerCatalogCacheLoader-1] createCollection: config.cache.chunks.test2_fsmdb0.fsmcoll0 with provided UUID: e923876b-cb14-4999-bce6-e0591b1153b2 and options: { uuid: UUID("e923876b-cb14-4999-bce6-e0591b1153b2") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.567-0500 Implicit session: session { "id" : UUID("15a703ed-db4b-435e-a8cc-047fe33aebbf") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.567-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.225-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 1cfd3983-835c-4328-8400-a85f60138d9c: test2_fsmdb0.fsmcoll0 ( 11da2d1e-3dd5-4812-9686-c490a6bdfff0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.567-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.911-0500 I INDEX [conn101] validating index consistency _id_ on collection local.system.rollback.id
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.567-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:36.164-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.567-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.021-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-131-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae) with drop timestamp Timestamp(1574796674, 3539)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.567-0500 [jsTest] New session started with sessionID: { "id" : UUID("cdec958c-96d7-4fd2-bd53-1de13196d71f") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.637-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.568-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0] Pausing the background check repl dbhash thread.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.568-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.639-0500 I STORAGE [ReplWriterWorker-0] createCollection: test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 with provided UUID: b1a5c7a3-d406-439a-9c39-a502710d3e37 and options: { uuid: UUID("b1a5c7a3-d406-439a-9c39-a502710d3e37"), temp: true }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.568-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.265-0500 I STORAGE [ReplWriterWorker-10] createCollection: config.cache.chunks.test2_fsmdb0.fsmcoll0 with provided UUID: c904d8e5-593f-4133-b81d-a4e28a1049f0 and options: { uuid: UUID("c904d8e5-593f-4133-b81d-a4e28a1049f0") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.568-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:34.861-0500 I COMMAND [conn106] command test2_fsmdb0.agg_out appName: "tid:0" command: shardCollection { shardCollection: "test2_fsmdb0.agg_out", key: { _id: "hashed" }, lsid: { id: UUID("dc61b4f9-aa45-4579-b7ea-98c3efc65c32") }, $clusterTime: { clusterTime: Timestamp(1574796694, 1590), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:244 protocol:op_msg 111ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.568-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.244-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: done building index _id_ on ns config.cache.chunks.test2_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.569-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.249-0500 I STORAGE [ReplWriterWorker-15] createCollection: config.cache.chunks.test2_fsmdb0.fsmcoll0 with provided UUID: c904d8e5-593f-4133-b81d-a4e28a1049f0 and options: { uuid: UUID("c904d8e5-593f-4133-b81d-a4e28a1049f0") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.569-0500 [jsTest] New session started with sessionID: { "id" : UUID("77a7638c-a563-452b-b0b8-9d2e3ddf57ff") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.911-0500 I INDEX [conn101] Validation complete for collection local.system.rollback.id (UUID: 0ad52f2a-9d3e-4f9f-b91b-17a9c570ab7e). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.569-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:36.201-0500 I NETWORK [conn41] end connection 127.0.0.1:58350 (1 connection now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.569-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.022-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-137-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555) with drop timestamp Timestamp(1574796676, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.569-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.638-0500 I STORAGE [ReplWriterWorker-4] createCollection: test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 with provided UUID: b1a5c7a3-d406-439a-9c39-a502710d3e37 and options: { uuid: UUID("b1a5c7a3-d406-439a-9c39-a502710d3e37"), temp: true }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.569-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.654-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.569-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.570-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.280-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns config.cache.chunks.test2_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.570-0500 [jsTest] New session started with sessionID: { "id" : UUID("aaf026d6-7360-4c06-af83-076be85d8a06") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.597-0500 I COMMAND [conn106] command test2_fsmdb0.agg_out appName: "tid:0" command: collMod { collMod: "agg_out", validationAction: "warn", lsid: { id: UUID("dc61b4f9-aa45-4579-b7ea-98c3efc65c32") }, $clusterTime: { clusterTime: Timestamp(1574796694, 3610), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test2_fsmdb0" } numYields:0 reslen:249 protocol:op_msg 718ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.570-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.244-0500 I INDEX [ShardServerCatalogCacheLoader-1] Registering index build: eaf803e5-d567-491c-8c87-823bb6e37762
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.570-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.264-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns config.cache.chunks.test2_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.570-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:31.912-0500 I NETWORK [conn101] end connection 127.0.0.1:56464 (34 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.570-0500 Running data consistency checks for replica set: shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:36.210-0500 I NETWORK [conn39] end connection 127.0.0.1:58288 (0 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.571-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.023-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-138-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555) with drop timestamp Timestamp(1574796676, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.571-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.653-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.571-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.571-0500 [jsTest] New session started with sessionID: { "id" : UUID("891d1df7-ccdb-4390-b7d4-a3cf3dc30db0") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.655-0500 I STORAGE [ReplWriterWorker-11] createCollection: test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc with provided UUID: 08932b51-9933-4490-ab6b-1df6cfb57633 and options: { uuid: UUID("08932b51-9933-4490-ab6b-1df6cfb57633"), temp: true }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.571-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.295-0500 I INDEX [ReplWriterWorker-7] index build: starting on config.cache.chunks.test2_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.571-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.597-0500 I COMMAND [conn110] command test2_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("edd46bd6-7845-419a-b11b-5e92d9ad8dd5") }, $clusterTime: { clusterTime: Timestamp(1574796694, 1590), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test2_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946\", to: \"test2_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test2_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:713 protocol:op_msg 843ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.572-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.260-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: starting on config.cache.chunks.test2_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.572-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.282-0500 I INDEX [ReplWriterWorker-7] index build: starting on config.cache.chunks.test2_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.572-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:32.006-0500 I NETWORK [conn100] end connection 127.0.0.1:56432 (33 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.572-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.023-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-135-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555) with drop timestamp Timestamp(1574796676, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.572-0500 [jsTest] New session started with sessionID: { "id" : UUID("1e985ccd-41b4-4fea-b803-f664c690c71f") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.653-0500 I STORAGE [ReplWriterWorker-14] createCollection: test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc with provided UUID: 08932b51-9933-4490-ab6b-1df6cfb57633 and options: { uuid: UUID("08932b51-9933-4490-ab6b-1df6cfb57633"), temp: true }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.572-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.670-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.573-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.295-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.573-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.600-0500 I COMMAND [conn107] command test2_fsmdb0.agg_out appName: "tid:4" command: collMod { collMod: "agg_out", validationAction: "error", lsid: { id: UUID("3be876c6-d64e-4ca7-b4fd-5b1e9c261d78") }, $clusterTime: { clusterTime: Timestamp(1574796694, 3611), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test2_fsmdb0" } numYields:0 reslen:249 protocol:op_msg 712ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.573-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.260-0500 I INDEX [ShardServerCatalogCacheLoader-1] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.573-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.260-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Index build initialized: eaf803e5-d567-491c-8c87-823bb6e37762: config.cache.chunks.test2_fsmdb0.fsmcoll0 (e923876b-cb14-4999-bce6-e0591b1153b2 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.573-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.573-0500 [jsTest] New session started with sessionID: { "id" : UUID("49fe22fc-42e8-4b47-b3ca-be2e0a8ff8b8") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.573-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.574-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:32.016-0500 I NETWORK [conn99] end connection 127.0.0.1:56430 (32 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.574-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.574-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.024-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-141-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896) with drop timestamp Timestamp(1574796676, 507)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.574-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.666-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.574-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.671-0500 I STORAGE [ReplWriterWorker-4] createCollection: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 with provided UUID: d997bf94-238b-49fc-9338-fc2aecfcb151 and options: { uuid: UUID("d997bf94-238b-49fc-9338-fc2aecfcb151"), temp: true }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.574-0500 [jsTest] New session started with sessionID: { "id" : UUID("a3b0f2aa-dbc1-46a5-ada5-901be63d9511") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.574-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.295-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: ca99834a-65cb-4e4f-9bda-b83f04e88cba: config.cache.chunks.test2_fsmdb0.fsmcoll0 (c904d8e5-593f-4133-b81d-a4e28a1049f0 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.575-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.617-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.575-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.575-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.282-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.575-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.260-0500 I INDEX [ShardServerCatalogCacheLoader-1] Waiting for index build to complete: eaf803e5-d567-491c-8c87-823bb6e37762
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.575-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:32.835-0500 I SHARDING [conn23] distributed lock 'test1_fsmdb0' acquired for 'dropDatabase', ts : 5ddd7d945cde74b6784bb558
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.575-0500 [jsTest] New session started with sessionID: { "id" : UUID("9ac19421-9785-44f8-a5e1-2aa2dabbcaa5") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.025-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-142-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896) with drop timestamp Timestamp(1574796676, 507)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.576-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.666-0500 I STORAGE [ReplWriterWorker-0] createCollection: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 with provided UUID: d997bf94-238b-49fc-9338-fc2aecfcb151 and options: { uuid: UUID("d997bf94-238b-49fc-9338-fc2aecfcb151"), temp: true }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.576-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.687-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.576-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.295-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.576-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.619-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.576-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.282-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: bf438bf6-bfb1-4002-832c-a99bbbc4df16: config.cache.chunks.test2_fsmdb0.fsmcoll0 (c904d8e5-593f-4133-b81d-a4e28a1049f0 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.576-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.260-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.577-0500 [jsTest] New session started with sessionID: { "id" : UUID("3c750652-9e70-4385-bf3e-27bce885be9b") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:32.835-0500 I SHARDING [conn23] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:32.835-0500-5ddd7d945cde74b6784bb55b", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55596", time: new Date(1574796692835), what: "dropDatabase.start", ns: "test1_fsmdb0", details: {} }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.577-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.026-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-139-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896) with drop timestamp Timestamp(1574796676, 507)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.577-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.680-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.577-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.688-0500 I STORAGE [ReplWriterWorker-8] createCollection: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 with provided UUID: 08dbd6f5-b7d8-47c3-b06b-600c165e66f1 and options: { uuid: UUID("08dbd6f5-b7d8-47c3-b06b-600c165e66f1"), temp: true }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.577-0500 Running data consistency checks for replica set: shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.296-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.577-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.634-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.578-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.282-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.578-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.261-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.578-0500 [jsTest] New session started with sessionID: { "id" : UUID("7d7733c3-ae9c-41b0-9d68-0891d761757d") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:32.837-0500 I SHARDING [conn23] distributed lock 'test1_fsmdb0.agg_out' acquired for 'dropCollection', ts : 5ddd7d945cde74b6784bb55e
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.578-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.028-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-129-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 510)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.578-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.681-0500 I STORAGE [ReplWriterWorker-10] createCollection: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 with provided UUID: 08dbd6f5-b7d8-47c3-b06b-600c165e66f1 and options: { uuid: UUID("08dbd6f5-b7d8-47c3-b06b-600c165e66f1"), temp: true }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.578-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.704-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.579-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.579-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.296-0500 I SHARDING [ReplWriterWorker-4] Marking collection config.cache.chunks.test2_fsmdb0.fsmcoll0 as collection version:
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.579-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.635-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.579-0500 [jsTest] New session started with sessionID: { "id" : UUID("d6149994-3972-4719-ac96-7cc944de469f") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.283-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.579-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.264-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test2_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.579-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:32.837-0500 I SHARDING [conn23] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:32.837-0500-5ddd7d945cde74b6784bb560", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55596", time: new Date(1574796692837), what: "dropCollection.start", ns: "test1_fsmdb0.agg_out", details: {} }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.579-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.029-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-130-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 510)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.580-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.698-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.580-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.722-0500 I INDEX [ReplWriterWorker-12] index build: starting on test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.580-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.299-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.580-0500 [jsTest] New session started with sessionID: { "id" : UUID("06bf5d1d-cd06-476f-b7ae-6fb3c7c35a0b") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.645-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.580-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.283-0500 I SHARDING [ReplWriterWorker-14] Marking collection config.cache.chunks.test2_fsmdb0.fsmcoll0 as collection version:
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.580-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.265-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: eaf803e5-d567-491c-8c87-823bb6e37762: config.cache.chunks.test2_fsmdb0.fsmcoll0 ( e923876b-cb14-4999-bce6-e0591b1153b2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.581-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:32.848-0500 I SHARDING [conn23] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:32.848-0500-5ddd7d945cde74b6784bb569", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55596", time: new Date(1574796692848), what: "dropCollection", ns: "test1_fsmdb0.agg_out", details: {} }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:37.581-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js finished.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.029-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-127-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 510)
[executor:fsm_workload_test:job0] 2019-11-26T14:31:37.581-0500 agg_out:CheckReplDBHashInBackground ran in 3.64 seconds: no failures detected.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.714-0500 I INDEX [ReplWriterWorker-12] index build: starting on test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.722-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.299-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:34.300-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ca99834a-65cb-4e4f-9bda-b83f04e88cba: config.cache.chunks.test2_fsmdb0.fsmcoll0 ( c904d8e5-593f-4133-b81d-a4e28a1049f0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.286-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.265-0500 I INDEX [ShardServerCatalogCacheLoader-1] Index build completed: eaf803e5-d567-491c-8c87-823bb6e37762
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:32.852-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7d945cde74b6784bb55e' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.030-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-146-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1014)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.715-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.723-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: e70ff40b-c14e-4d2a-acfb-899c785e6289: test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 (b1a5c7a3-d406-439a-9c39-a502710d3e37 ): indexes: 1
[executor:fsm_workload_test:job0] 2019-11-26T14:31:37.583-0500 Running agg_out:CheckReplDBHash...
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.647-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:37.584-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash.js
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:35.752-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52394 #68 (13 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.286-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.265-0500 I SHARDING [ShardServerCatalogCacheLoader-1] Marking collection config.cache.chunks.test2_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:32.853-0500 I SHARDING [conn23] distributed lock 'test1_fsmdb0.fsmcoll0' acquired for 'dropCollection', ts : 5ddd7d945cde74b6784bb56c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.031-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-148-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1014)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.715-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: dea0e0fe-4548-4d77-8dcb-01e2ff5d4f5a: test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 (b1a5c7a3-d406-439a-9c39-a502710d3e37 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.723-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.675-0500 I NETWORK [conn106] end connection 127.0.0.1:45198 (5 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:35.752-0500 I NETWORK [conn68] received client metadata from 127.0.0.1:52394 conn68: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:34.288-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: bf438bf6-bfb1-4002-832c-a99bbbc4df16: config.cache.chunks.test2_fsmdb0.fsmcoll0 ( c904d8e5-593f-4133-b81d-a4e28a1049f0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.270-0500 I SHARDING [conn55] Created 4 chunk(s) for: test2_fsmdb0.fsmcoll0, producing collection version 1|3||5ddd7d96cf8184c2e1493933
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:32.853-0500 I SHARDING [conn23] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:32.853-0500-5ddd7d945cde74b6784bb56e", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55596", time: new Date(1574796692853), what: "dropCollection.start", ns: "test1_fsmdb0.fsmcoll0", details: {} }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.032-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-143-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1014)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.715-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.723-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.682-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:35.817-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52418 #69 (14 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:35.752-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53286 #65 (14 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.270-0500 I SHARDING [conn55] about to log metadata event into changelog: { _id: "nz_desktop:20004-2019-11-26T14:31:34.270-0500-5ddd7d96cf8184c2e149395d", server: "nz_desktop:20004", shard: "shard-rs1", clientAddr: "127.0.0.1:46028", time: new Date(1574796694270), what: "shardCollection.end", ns: "test2_fsmdb0.fsmcoll0", details: { version: "1|3||5ddd7d96cf8184c2e1493933" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:32.867-0500 I SHARDING [conn23] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:32.867-0500-5ddd7d945cde74b6784bb577", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55596", time: new Date(1574796692867), what: "dropCollection", ns: "test1_fsmdb0.fsmcoll0", details: {} }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.033-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-147-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1520)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.715-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.727-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.684-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:35.817-0500 I NETWORK [conn69] received client metadata from 127.0.0.1:52418 conn69: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:35.752-0500 I NETWORK [conn65] received client metadata from 127.0.0.1:53286 conn65: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.272-0500 I COMMAND [conn55] command admin.$cmd appName: "MongoDB Shell" command: _shardsvrShardCollection { _shardsvrShardCollection: "test2_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("c124d5e2-848c-4352-aca8-03195d9029ac"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 11), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45144", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 11), t: 1 } }, $db: "admin" } numYields:0 reslen:415 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 9 } }, ReplicationStateTransition: { acquireCount: { w: 15 } }, Global: { acquireCount: { r: 8, w: 7 } }, Database: { acquireCount: { r: 8, w: 7, W: 1 } }, Collection: { acquireCount: { r: 8, w: 3, W: 4 } }, Mutex: { acquireCount: { r: 16, W: 4 } } } flowControl:{ acquireCount: 5 } storage:{} protocol:op_msg 145ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:32.869-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7d945cde74b6784bb56c' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.033-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-150-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1520)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.718-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.737-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: e70ff40b-c14e-4d2a-acfb-899c785e6289: test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 ( b1a5c7a3-d406-439a-9c39-a502710d3e37 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:37.590-0500 JSTest jstests/hooks/run_check_repl_dbhash.js started with pid 15731.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.698-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:35.828-0500 W CONTROL [conn69] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 718 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:35.818-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53308 #66 (15 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.436-0500 I STORAGE [conn55] createCollection: test2_fsmdb0.agg_out with generated UUID: 9d032268-b7b7-4429-b5aa-61c323334f6e and options: {}
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:32.881-0500 I SHARDING [conn23] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:32.881-0500-5ddd7d945cde74b6784bb57f", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55596", time: new Date(1574796692881), what: "dropDatabase", ns: "test1_fsmdb0", details: {} }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.035-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-144-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1520)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.727-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: dea0e0fe-4548-4d77-8dcb-01e2ff5d4f5a: test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 ( b1a5c7a3-d406-439a-9c39-a502710d3e37 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.743-0500 I INDEX [ReplWriterWorker-0] index build: starting on test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.699-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:35.843-0500 W CONTROL [conn69] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 718 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:35.818-0500 I NETWORK [conn66] received client metadata from 127.0.0.1:53308 conn66: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.445-0500 I INDEX [conn55] index build: done building index _id_ on ns test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:32.884-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7d945cde74b6784bb558' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.036-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-157-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b) with drop timestamp Timestamp(1574796676, 3095)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.736-0500 I INDEX [ReplWriterWorker-4] index build: starting on test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.743-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.730-0500 I NETWORK [conn110] end connection 127.0.0.1:45210 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:35.846-0500 I NETWORK [conn69] end connection 127.0.0.1:52418 (13 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:35.828-0500 W CONTROL [conn66] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 323 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.467-0500 I INDEX [conn65] Registering index build: a7f6f2cd-7446-460d-b9e9-cfdb44e3f49f
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.032-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56498 #102 (33 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.037-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-158-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b) with drop timestamp Timestamp(1574796676, 3095)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.736-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.743-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 28a4046d-93d1-432d-9acd-4909d5716e35: test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f (7a23accc-ea31-4729-b99e-5394e0ac262c ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.739-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45212 #111 (5 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:35.870-0500 I NETWORK [conn68] end connection 127.0.0.1:52394 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:35.844-0500 W CONTROL [conn66] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 323 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.478-0500 I INDEX [conn65] index build: starting on test2_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.032-0500 I NETWORK [conn102] received client metadata from 127.0.0.1:56498 conn102: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.038-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-155-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b) with drop timestamp Timestamp(1574796676, 3095)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.736-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 7589e0a2-a89a-47fb-a39d-99814080ddd5: test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f (7a23accc-ea31-4729-b99e-5394e0ac262c ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.744-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.739-0500 I NETWORK [conn111] received client metadata from 127.0.0.1:45212 conn111: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:36.211-0500 I NETWORK [conn66] end connection 127.0.0.1:52332 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:35.846-0500 I NETWORK [conn66] end connection 127.0.0.1:53308 (14 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.478-0500 I INDEX [conn65] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.033-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56500 #103 (34 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.039-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-162-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7) with drop timestamp Timestamp(1574796676, 3224)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.736-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.744-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.807-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45234 #112 (6 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:36.223-0500 I NETWORK [conn65] end connection 127.0.0.1:52300 (10 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:35.870-0500 I NETWORK [conn65] end connection 127.0.0.1:53286 (13 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.478-0500 I STORAGE [conn65] Index build initialized: a7f6f2cd-7446-460d-b9e9-cfdb44e3f49f: test2_fsmdb0.agg_out (9d032268-b7b7-4429-b5aa-61c323334f6e ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.033-0500 I NETWORK [conn103] received client metadata from 127.0.0.1:56500 conn103: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.039-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-168-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7) with drop timestamp Timestamp(1574796676, 3224)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.737-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.749-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 5 side writes (inserted: 5, deleted: 0) for '_id_hashed' in 1 ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.807-0500 I NETWORK [conn112] received client metadata from 127.0.0.1:45234 conn112: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:36.211-0500 I NETWORK [conn63] end connection 127.0.0.1:53222 (12 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.478-0500 I INDEX [conn65] Waiting for index build to complete: a7f6f2cd-7446-460d-b9e9-cfdb44e3f49f
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.037-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56510 #104 (35 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.040-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-159-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7) with drop timestamp Timestamp(1574796676, 3224)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.738-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.750-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 59 side writes (inserted: 59, deleted: 0) for '_id_hashed' in 0 ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.815-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45238 #113 (7 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:36.223-0500 I NETWORK [conn62] end connection 127.0.0.1:53186 (11 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.478-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.037-0500 I NETWORK [conn104] received client metadata from 127.0.0.1:56510 conn104: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.041-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-163-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990) with drop timestamp Timestamp(1574796676, 3292)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.741-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 7589e0a2-a89a-47fb-a39d-99814080ddd5: test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f ( 7a23accc-ea31-4729-b99e-5394e0ac262c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.750-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.759-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 28a4046d-93d1-432d-9acd-4909d5716e35: test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f ( 7a23accc-ea31-4729-b99e-5394e0ac262c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.479-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.038-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56512 #105 (36 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.043-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-164-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990) with drop timestamp Timestamp(1574796676, 3292)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.758-0500 I INDEX [ReplWriterWorker-5] index build: starting on test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.815-0500 I NETWORK [conn113] received client metadata from 127.0.0.1:45238 conn113: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.768-0500 I INDEX [ReplWriterWorker-14] index build: starting on test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.768-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.038-0500 I NETWORK [conn105] received client metadata from 127.0.0.1:56512 conn105: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.044-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-160-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990) with drop timestamp Timestamp(1574796676, 3292)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.758-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.845-0500 I NETWORK [conn112] end connection 127.0.0.1:45234 (6 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.480-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.768-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 4cea2a18-fed8-446b-9ac3-6b49e6b74ba6: test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc (08932b51-9933-4490-ab6b-1df6cfb57633 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.065-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56540 #106 (37 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.045-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-153-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 3540)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.758-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 8ac5654b-016d-4a82-a819-02b8974ded80: test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc (08932b51-9933-4490-ab6b-1df6cfb57633 ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.859-0500 I NETWORK [conn113] end connection 127.0.0.1:45238 (5 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.482-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: a7f6f2cd-7446-460d-b9e9-cfdb44e3f49f: test2_fsmdb0.agg_out ( 9d032268-b7b7-4429-b5aa-61c323334f6e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.768-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.065-0500 I NETWORK [conn106] received client metadata from 127.0.0.1:56540 conn106: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.045-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-154-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 3540)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.758-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:35.863-0500 I NETWORK [conn111] end connection 127.0.0.1:45212 (4 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.482-0500 I INDEX [conn65] Index build completed: a7f6f2cd-7446-460d-b9e9-cfdb44e3f49f
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.769-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.077-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56550 #107 (38 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.046-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-151-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 3540)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.759-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.763-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.594-0500 I STORAGE [conn82] createCollection: test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f with generated UUID: 7a23accc-ea31-4729-b99e-5394e0ac262c and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.771-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.077-0500 I NETWORK [conn107] received client metadata from 127.0.0.1:56550 conn107: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.047-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-173-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef) with drop timestamp Timestamp(1574796676, 4046)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:36.176-0500 I COMMAND [conn107] command test2_fsmdb0 appName: "tid:4" command: enableSharding { enableSharding: "test2_fsmdb0", lsid: { id: UUID("3be876c6-d64e-4ca7-b4fd-5b1e9c261d78") }, $clusterTime: { clusterTime: Timestamp(1574796695, 535), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:163 protocol:op_msg 505ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.765-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f (7a23accc-ea31-4729-b99e-5394e0ac262c) to test2_fsmdb0.agg_out and drop 9d032268-b7b7-4429-b5aa-61c323334f6e.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.594-0500 I STORAGE [conn84] createCollection: test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 with generated UUID: b1a5c7a3-d406-439a-9c39-a502710d3e37 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.773-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f (7a23accc-ea31-4729-b99e-5394e0ac262c) to test2_fsmdb0.agg_out and drop 9d032268-b7b7-4429-b5aa-61c323334f6e.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.080-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56552 #108 (39 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.048-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-174-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef) with drop timestamp Timestamp(1574796676, 4046)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.049-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-172-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef) with drop timestamp Timestamp(1574796676, 4046)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.765-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test2_fsmdb0.agg_out (9d032268-b7b7-4429-b5aa-61c323334f6e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796694, 1469), t: 1 } and commit timestamp Timestamp(1574796694, 1469)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.594-0500 I STORAGE [conn77] createCollection: test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc with generated UUID: 08932b51-9933-4490-ab6b-1df6cfb57633 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.774-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test2_fsmdb0.agg_out (9d032268-b7b7-4429-b5aa-61c323334f6e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796694, 1469), t: 1 } and commit timestamp Timestamp(1574796694, 1469)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.080-0500 I NETWORK [conn108] received client metadata from 127.0.0.1:56552 conn108: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:36.185-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.050-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-167-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 4555)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.765-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test2_fsmdb0.agg_out (9d032268-b7b7-4429-b5aa-61c323334f6e).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.595-0500 I STORAGE [conn88] createCollection: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 with generated UUID: d997bf94-238b-49fc-9338-fc2aecfcb151 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.774-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test2_fsmdb0.agg_out (9d032268-b7b7-4429-b5aa-61c323334f6e).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.111-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0' acquired for 'dropCollection', ts : 5ddd7d965cde74b6784bb598
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:36.186-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.051-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-170-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 4555)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.765-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 7a23accc-ea31-4729-b99e-5394e0ac262c from test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f to test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.596-0500 I STORAGE [conn85] createCollection: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 with generated UUID: 08dbd6f5-b7d8-47c3-b06b-600c165e66f1 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.774-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 7a23accc-ea31-4729-b99e-5394e0ac262c from test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f to test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.112-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0.fsmcoll0' acquired for 'dropCollection', ts : 5ddd7d965cde74b6784bb59a
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:36.199-0500 I NETWORK [conn107] end connection 127.0.0.1:45202 (3 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.052-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-165-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 4555)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.765-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.agg_out (9d032268-b7b7-4429-b5aa-61c323334f6e)'. Ident: 'index-130--2310912778499990807', commit timestamp: 'Timestamp(1574796694, 1469)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.622-0500 I INDEX [conn82] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.774-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.agg_out (9d032268-b7b7-4429-b5aa-61c323334f6e)'. Ident: 'index-130--7234316082034423155', commit timestamp: 'Timestamp(1574796694, 1469)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.113-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d965cde74b6784bb59a' unlocked.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:36.209-0500 I NETWORK [conn97] end connection 127.0.0.1:45092 (2 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.053-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-181-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5562)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.765-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.agg_out (9d032268-b7b7-4429-b5aa-61c323334f6e)'. Ident: 'index-131--2310912778499990807', commit timestamp: 'Timestamp(1574796694, 1469)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.622-0500 I INDEX [conn82] Registering index build: c18b9612-6853-4d51-9f82-21c83ee413fc
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.774-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.agg_out (9d032268-b7b7-4429-b5aa-61c323334f6e)'. Ident: 'index-131--7234316082034423155', commit timestamp: 'Timestamp(1574796694, 1469)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.114-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d965cde74b6784bb598' unlocked.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:36.210-0500 I NETWORK [conn99] end connection 127.0.0.1:45136 (1 connection now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.054-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-182-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5562)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.765-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test2_fsmdb0.agg_out'. Ident: collection-129--2310912778499990807, commit timestamp: Timestamp(1574796694, 1469)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.626-0500 I INDEX [conn84] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.774-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test2_fsmdb0.agg_out'. Ident: collection-129--7234316082034423155, commit timestamp: Timestamp(1574796694, 1469)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.116-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d965cde74b6784bb5a2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:36.212-0500 I NETWORK [conn100] end connection 127.0.0.1:45144 (0 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.055-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-178-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5562)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.766-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 8ac5654b-016d-4a82-a819-02b8974ded80: test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc ( 08932b51-9933-4490-ab6b-1df6cfb57633 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.627-0500 I INDEX [conn84] Registering index build: 51228426-6206-462d-b8f2-e9b94c1b974e
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.775-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 4cea2a18-fed8-446b-9ac3-6b49e6b74ba6: test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc ( 08932b51-9933-4490-ab6b-1df6cfb57633 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.119-0500 I SHARDING [conn19] Registering new database { _id: "test2_fsmdb0", primary: "shard-rs1", partitioned: false, version: { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } } in sharding catalog
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.055-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-180-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5563)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.785-0500 I INDEX [ReplWriterWorker-11] index build: starting on test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.634-0500 I INDEX [conn77] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.791-0500 I INDEX [ReplWriterWorker-1] index build: starting on test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.121-0500 I SHARDING [conn19] Enabling sharding for database [test2_fsmdb0] in config db
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.056-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-188-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5563)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.785-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.634-0500 I INDEX [conn77] Registering index build: be717c1d-7de2-4baa-bf03-42335dcaa367
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.791-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.791-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 0cf404b3-d30a-40ca-ae26-f8fa95f4b7e8: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 (d997bf94-238b-49fc-9338-fc2aecfcb151 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.058-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-177-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5563)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.785-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 50f3ba58-7081-4393-af52-6f330e4acb68: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 (d997bf94-238b-49fc-9338-fc2aecfcb151 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.641-0500 I INDEX [conn88] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.122-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d965cde74b6784bb5a2' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.791-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.059-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-179-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6067)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.785-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.641-0500 I INDEX [conn88] Registering index build: 58b41d0a-e84f-4921-a652-4584a1790492
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.125-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d965cde74b6784bb5ab
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.792-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.060-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-186-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6067)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.785-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.649-0500 I INDEX [conn85] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.126-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0.fsmcoll0' acquired for 'shardCollection', ts : 5ddd7d965cde74b6784bb5ad
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.794-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.061-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-176-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6067)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.788-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.649-0500 I INDEX [conn85] Registering index build: a9f54b2d-fcae-43a1-8096-8ba49dcef3a6
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.150-0500 I NETWORK [conn103] end connection 127.0.0.1:56500 (38 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.804-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 0cf404b3-d30a-40ca-ae26-f8fa95f4b7e8: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 ( d997bf94-238b-49fc-9338-fc2aecfcb151 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.062-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-185-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6573)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.799-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 50f3ba58-7081-4393-af52-6f330e4acb68: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 ( d997bf94-238b-49fc-9338-fc2aecfcb151 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.662-0500 I INDEX [conn82] index build: starting on test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.157-0500 I NETWORK [conn102] end connection 127.0.0.1:56498 (37 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.813-0500 I INDEX [ReplWriterWorker-7] index build: starting on test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.062-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-190-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6573)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.807-0500 I INDEX [ReplWriterWorker-5] index build: starting on test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.662-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.222-0500 D4 TXN [conn52] New transaction started with txnNumber: 0 on session with lsid 355adea8-8e3f-416b-87bb-acc9022c9b32
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.813-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.063-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-183-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6573)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.807-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.662-0500 I STORAGE [conn82] Index build initialized: c18b9612-6853-4d51-9f82-21c83ee413fc: test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f (7a23accc-ea31-4729-b99e-5394e0ac262c ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.272-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.813-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: c4ded076-b7c1-4f14-917b-d215cfc127c4: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 (08dbd6f5-b7d8-47c3-b06b-600c165e66f1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.064-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-193-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 7082)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.807-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 731dbea2-b2a7-4a88-8ea3-cfb8e047e906: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 (08dbd6f5-b7d8-47c3-b06b-600c165e66f1 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.662-0500 I INDEX [conn82] Waiting for index build to complete: c18b9612-6853-4d51-9f82-21c83ee413fc
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.273-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.fsmcoll0 to version 1|3||5ddd7d96cf8184c2e1493933 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.813-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.066-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-194-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 7082)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.807-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.677-0500 I INDEX [conn84] index build: starting on test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.274-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d965cde74b6784bb5ad' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.814-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.067-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-191-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 7082)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.807-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.677-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.276-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d965cde74b6784bb5ab' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.815-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 (b1a5c7a3-d406-439a-9c39-a502710d3e37) to test2_fsmdb0.agg_out and drop 7a23accc-ea31-4729-b99e-5394e0ac262c.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.068-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-197-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.808-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 (b1a5c7a3-d406-439a-9c39-a502710d3e37) to test2_fsmdb0.agg_out and drop 7a23accc-ea31-4729-b99e-5394e0ac262c.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.677-0500 I STORAGE [conn84] Index build initialized: 51228426-6206-462d-b8f2-e9b94c1b974e: test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 (b1a5c7a3-d406-439a-9c39-a502710d3e37 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.276-0500 I COMMAND [conn19] command admin.$cmd appName: "MongoDB Shell" command: _configsvrShardCollection { _configsvrShardCollection: "test2_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("c124d5e2-848c-4352-aca8-03195d9029ac"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1574796694, 9), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45144", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 9), t: 1 } }, $db: "admin" } numYields:0 reslen:587 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 6 } }, Global: { acquireCount: { r: 2, w: 4 } }, Database: { acquireCount: { r: 2, w: 4 } }, Collection: { acquireCount: { r: 2, w: 4 } }, Mutex: { acquireCount: { r: 10, W: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 152ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.816-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.068-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-198-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.810-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.677-0500 I INDEX [conn84] Waiting for index build to complete: 51228426-6206-462d-b8f2-e9b94c1b974e
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.278-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d965cde74b6784bb5cc
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.817-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test2_fsmdb0.agg_out (7a23accc-ea31-4729-b99e-5394e0ac262c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796694, 1590), t: 1 } and commit timestamp Timestamp(1574796694, 1590)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.069-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-195-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.810-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test2_fsmdb0.agg_out (7a23accc-ea31-4729-b99e-5394e0ac262c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796694, 1590), t: 1 } and commit timestamp Timestamp(1574796694, 1590)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.677-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.279-0500 I SHARDING [conn19] Enabling sharding for database [test2_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.817-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test2_fsmdb0.agg_out (7a23accc-ea31-4729-b99e-5394e0ac262c).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.070-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-203-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.810-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test2_fsmdb0.agg_out (7a23accc-ea31-4729-b99e-5394e0ac262c).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.677-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.280-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d965cde74b6784bb5cc' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.817-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection b1a5c7a3-d406-439a-9c39-a502710d3e37 from test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 to test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.071-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-204-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.810-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection b1a5c7a3-d406-439a-9c39-a502710d3e37 from test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 to test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.678-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.282-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d965cde74b6784bb5d2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.817-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.agg_out (7a23accc-ea31-4729-b99e-5394e0ac262c)'. Ident: 'index-134--7234316082034423155', commit timestamp: 'Timestamp(1574796694, 1590)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.072-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-200-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.810-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.agg_out (7a23accc-ea31-4729-b99e-5394e0ac262c)'. Ident: 'index-134--2310912778499990807', commit timestamp: 'Timestamp(1574796694, 1590)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.678-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.283-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0.fsmcoll0' acquired for 'shardCollection', ts : 5ddd7d965cde74b6784bb5d4
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.817-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.agg_out (7a23accc-ea31-4729-b99e-5394e0ac262c)'. Ident: 'index-145--7234316082034423155', commit timestamp: 'Timestamp(1574796694, 1590)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.074-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-202-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 574)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.810-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.agg_out (7a23accc-ea31-4729-b99e-5394e0ac262c)'. Ident: 'index-145--2310912778499990807', commit timestamp: 'Timestamp(1574796694, 1590)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.680-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.285-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.817-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test2_fsmdb0.agg_out'. Ident: collection-133--7234316082034423155, commit timestamp: Timestamp(1574796694, 1590)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.074-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-208-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 574)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.810-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test2_fsmdb0.agg_out'. Ident: collection-133--2310912778499990807, commit timestamp: Timestamp(1574796694, 1590)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.683-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.285-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.fsmcoll0 to version 1|3||5ddd7d96cf8184c2e1493933 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.818-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: c4ded076-b7c1-4f14-917b-d215cfc127c4: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 ( 08dbd6f5-b7d8-47c3-b06b-600c165e66f1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.075-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-199-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 574)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.812-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 731dbea2-b2a7-4a88-8ea3-cfb8e047e906: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 ( 08dbd6f5-b7d8-47c3-b06b-600c165e66f1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.691-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 51228426-6206-462d-b8f2-e9b94c1b974e: test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 ( b1a5c7a3-d406-439a-9c39-a502710d3e37 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.287-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d965cde74b6784bb5d4' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.820-0500 I STORAGE [ReplWriterWorker-2] createCollection: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 with provided UUID: 68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95 and options: { uuid: UUID("68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.076-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-207-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1015)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.815-0500 I STORAGE [ReplWriterWorker-1] createCollection: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 with provided UUID: 68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95 and options: { uuid: UUID("68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.693-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c18b9612-6853-4d51-9f82-21c83ee413fc: test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f ( 7a23accc-ea31-4729-b99e-5394e0ac262c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.288-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d965cde74b6784bb5d2' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.837-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.077-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-210-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1015)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.830-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.701-0500 I INDEX [conn77] index build: starting on test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.434-0500 I SHARDING [conn52] distributed lock 'test2_fsmdb0' acquired for 'createCollection', ts : 5ddd7d965cde74b6784bb5e3
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.853-0500 I STORAGE [ReplWriterWorker-6] createCollection: config.cache.chunks.test2_fsmdb0.agg_out with provided UUID: 13bc0717-3ecb-47d5-aedd-db010ec932d6 and options: { uuid: UUID("13bc0717-3ecb-47d5-aedd-db010ec932d6") }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.078-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-205-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1015)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.834-0500 I STORAGE [ReplWriterWorker-1] createCollection: config.cache.chunks.test2_fsmdb0.agg_out with provided UUID: 13bc0717-3ecb-47d5-aedd-db010ec932d6 and options: { uuid: UUID("13bc0717-3ecb-47d5-aedd-db010ec932d6") }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.701-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.435-0500 I SHARDING [conn52] distributed lock 'test2_fsmdb0.agg_out' acquired for 'createCollection', ts : 5ddd7d965cde74b6784bb5e5
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.868-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.079-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-213-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1523)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.848-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.701-0500 I STORAGE [conn77] Index build initialized: be717c1d-7de2-4baa-bf03-42335dcaa367: test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc (08932b51-9933-4490-ab6b-1df6cfb57633 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.464-0500 I SHARDING [conn52] distributed lock with ts: 5ddd7d965cde74b6784bb5e5' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.869-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc (08932b51-9933-4490-ab6b-1df6cfb57633) to test2_fsmdb0.agg_out and drop b1a5c7a3-d406-439a-9c39-a502710d3e37.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.079-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-214-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1523)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.848-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc (08932b51-9933-4490-ab6b-1df6cfb57633) to test2_fsmdb0.agg_out and drop b1a5c7a3-d406-439a-9c39-a502710d3e37.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.701-0500 I INDEX [conn84] Index build completed: 51228426-6206-462d-b8f2-e9b94c1b974e
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.465-0500 I SHARDING [conn52] distributed lock with ts: 5ddd7d965cde74b6784bb5e3' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.869-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test2_fsmdb0.agg_out (b1a5c7a3-d406-439a-9c39-a502710d3e37) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796694, 2222), t: 1 } and commit timestamp Timestamp(1574796694, 2222)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.081-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-211-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1523)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.849-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test2_fsmdb0.agg_out (b1a5c7a3-d406-439a-9c39-a502710d3e37) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796694, 2222), t: 1 } and commit timestamp Timestamp(1574796694, 2222)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.701-0500 I INDEX [conn77] Waiting for index build to complete: be717c1d-7de2-4baa-bf03-42335dcaa367
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.748-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d965cde74b6784bb5f3
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.869-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test2_fsmdb0.agg_out (b1a5c7a3-d406-439a-9c39-a502710d3e37).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.082-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-219-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2030)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.849-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test2_fsmdb0.agg_out (b1a5c7a3-d406-439a-9c39-a502710d3e37).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.701-0500 I INDEX [conn82] Index build completed: c18b9612-6853-4d51-9f82-21c83ee413fc
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.748-0500 I SHARDING [conn19] Enabling sharding for database [test2_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.869-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 08932b51-9933-4490-ab6b-1df6cfb57633 from test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc to test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.083-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-222-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2030)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.849-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 08932b51-9933-4490-ab6b-1df6cfb57633 from test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc to test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.701-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.749-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d965cde74b6784bb5f3' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.869-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.agg_out (b1a5c7a3-d406-439a-9c39-a502710d3e37)'. Ident: 'index-136--7234316082034423155', commit timestamp: 'Timestamp(1574796694, 2222)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.084-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-216-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2030)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.849-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.agg_out (b1a5c7a3-d406-439a-9c39-a502710d3e37)'. Ident: 'index-136--2310912778499990807', commit timestamp: 'Timestamp(1574796694, 2222)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.702-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.752-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d965cde74b6784bb5f9
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.869-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.agg_out (b1a5c7a3-d406-439a-9c39-a502710d3e37)'. Ident: 'index-143--7234316082034423155', commit timestamp: 'Timestamp(1574796694, 2222)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.085-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-220-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2597)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.849-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.agg_out (b1a5c7a3-d406-439a-9c39-a502710d3e37)'. Ident: 'index-143--2310912778499990807', commit timestamp: 'Timestamp(1574796694, 2222)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.711-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.753-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d965cde74b6784bb5fb
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.869-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test2_fsmdb0.agg_out'. Ident: collection-135--7234316082034423155, commit timestamp: Timestamp(1574796694, 2222)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.086-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-224-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2597)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.849-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test2_fsmdb0.agg_out'. Ident: collection-135--2310912778499990807, commit timestamp: Timestamp(1574796694, 2222)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.718-0500 I INDEX [conn88] index build: starting on test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.760-0500 D4 TXN [conn52] New transaction started with txnNumber: 0 on session with lsid 63077580-fda7-41f2-8745-2968e55a3821
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.876-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.086-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-217-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2597)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.855-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.718-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.857-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.876-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 (d997bf94-238b-49fc-9338-fc2aecfcb151) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796694, 3097), t: 1 } and commit timestamp Timestamp(1574796694, 3097)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.087-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-221-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3102)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.855-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 (d997bf94-238b-49fc-9338-fc2aecfcb151) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796694, 3097), t: 1 } and commit timestamp Timestamp(1574796694, 3097)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.719-0500 I STORAGE [conn88] Index build initialized: 58b41d0a-e84f-4921-a652-4584a1790492: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 (d997bf94-238b-49fc-9338-fc2aecfcb151 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.858-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:37.613-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.876-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 (d997bf94-238b-49fc-9338-fc2aecfcb151).
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.603-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 Implicit session: session { "id" : UUID("44fd18d0-76ce-48f8-a210-794c742e7238") }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 MongoDB server version: 0.0.0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 true
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 2019-11-26T14:31:37.673-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 2019-11-26T14:31:37.673-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 2019-11-26T14:31:37.674-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 [jsTest] New session started with sessionID: { "id" : UUID("41a1c9dd-bf53-413e-8ab6-d295f7ef927e") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 2019-11-26T14:31:37.677-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 2019-11-26T14:31:37.678-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 2019-11-26T14:31:37.678-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 2019-11-26T14:31:37.678-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 2019-11-26T14:31:37.678-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 [jsTest] New session started with sessionID: { "id" : UUID("eb4f1d52-55de-4f7d-8d89-4d1209efacf9") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.604-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500 2019-11-26T14:31:37.680-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500 2019-11-26T14:31:37.680-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500 2019-11-26T14:31:37.680-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500 2019-11-26T14:31:37.680-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500 2019-11-26T14:31:37.681-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500 [jsTest] New session started with sessionID: { "id" : UUID("ad6b96bc-7d0e-4401-89c3-c567606b217b") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "sharded cluster", "configsvr" : { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }, "shards" : { "shard-rs0" : { "type" : "replica set", "primary" : "localhost:20001", "nodes" : [ "localhost:20001", "localhost:20002", "localhost:20003" ] }, "shard-rs1" : { "type" : "replica set", "primary" : "localhost:20004", "nodes" : [ "localhost:20004", "localhost:20005", "localhost:20006" ] } }, "mongos" : { "type" : "mongos router", "nodes" : [ "localhost:20007", "localhost:20008" ] } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500 Implicit session: session { "id" : UUID("363f7444-5aa4-492a-8726-0810dd8472ea") }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500 Implicit session: session { "id" : UUID("de6b9527-5d9f-489d-8636-b255f526255a") }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500 MongoDB server version: 0.0.0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500 MongoDB server version: 0.0.0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.605-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.606-0500 [jsTest] New session started with sessionID: { "id" : UUID("307db509-d553-42ff-ab0f-dd9f1b801007") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.606-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.606-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.606-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.606-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.606-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.606-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.089-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-228-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3102)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.606-0500 [jsTest] New session started with sessionID: { "id" : UUID("e7cb66a8-da64-4d9f-a00f-fe9ea185f349") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.855-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 (d997bf94-238b-49fc-9338-fc2aecfcb151).
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.606-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.719-0500 I INDEX [conn88] Waiting for index build to complete: 58b41d0a-e84f-4921-a652-4584a1790492
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.606-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:37.664-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45254 #114 (1 connection now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.606-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:37.678-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53324 #67 (12 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.607-0500 Recreating replica set from config {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.607-0500 "_id" : "shard-rs0",
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:37.678-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52440 #70 (11 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.607-0500 "version" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.607-0500 "protocolVersion" : NumberLong(1),
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.859-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d965cde74b6784bb5fb' unlocked.
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.607-0500 "writeConcernMajorityJournalDefault" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.607-0500 "members" : [
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.607-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.607-0500 "_id" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.607-0500 "host" : "localhost:20001",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.607-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.607-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.607-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.607-0500 "priority" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.607-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 "_id" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 "host" : "localhost:20002",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 "_id" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 "host" : "localhost:20003",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.608-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 ],
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "settings" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "chainingAllowed" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "heartbeatIntervalMillis" : 2000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "heartbeatTimeoutSecs" : 10,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "electionTimeoutMillis" : 86400000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "catchUpTimeoutMillis" : -1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "catchUpTakeoverDelayMillis" : 30000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "getLastErrorModes" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "getLastErrorDefaults" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "w" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "wtimeout" : 0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.609-0500 "replicaSetId" : ObjectId("5ddd7d683bbfe7fa5630d3b8")
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 Recreating replica set from config {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 "_id" : "shard-rs1",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 "version" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 "protocolVersion" : NumberLong(1),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 "writeConcernMajorityJournalDefault" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 "members" : [
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 "_id" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 "host" : "localhost:20004",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 "priority" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 "_id" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.610-0500 "host" : "localhost:20005",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.611-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.876-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 (d997bf94-238b-49fc-9338-fc2aecfcb151)'. Ident: 'index-140--7234316082034423155', commit timestamp: 'Timestamp(1574796694, 3097)'
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.611-0500 "buildIndexes" : true,
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.090-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-218-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3102)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.611-0500 "hidden" : false,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.855-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 (d997bf94-238b-49fc-9338-fc2aecfcb151)'. Ident: 'index-140--2310912778499990807', commit timestamp: 'Timestamp(1574796694, 3097)'
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.611-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.611-0500 "tags" : {
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.725-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: be717c1d-7de2-4baa-bf03-42335dcaa367: test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc ( 08932b51-9933-4490-ab6b-1df6cfb57633 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.611-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:37.664-0500 I NETWORK [conn114] received client metadata from 127.0.0.1:45254 conn114: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.611-0500 },
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:37.678-0500 I NETWORK [conn67] received client metadata from 127.0.0.1:53324 conn67: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.611-0500 "slaveDelay" : NumberLong(0),
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:37.678-0500 I NETWORK [conn70] received client metadata from 127.0.0.1:52440 conn70: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.612-0500 "votes" : 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.861-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d965cde74b6784bb5f9 unlocked.
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.612-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.612-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.612-0500 "_id" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.612-0500 "host" : "localhost:20006",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.612-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.612-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.612-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.612-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.612-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.612-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.612-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.612-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.612-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.612-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.612-0500 ],
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 "settings" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 "chainingAllowed" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 "heartbeatIntervalMillis" : 2000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 "heartbeatTimeoutSecs" : 10,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 "electionTimeoutMillis" : 86400000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 "catchUpTimeoutMillis" : -1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 "catchUpTakeoverDelayMillis" : 30000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 "getLastErrorModes" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 "getLastErrorDefaults" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 "w" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 "wtimeout" : 0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 "replicaSetId" : ObjectId("5ddd7d6bcf8184c2e1492eba")
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 [jsTest] New session started with sessionID: { "id" : UUID("7b78b26f-f506-4231-8096-519abe1ef127") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.613-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500 [jsTest] New session started with sessionID: { "id" : UUID("34687f44-29da-4637-88ae-5fc26b14b72d") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500 [jsTest] New session started with sessionID: { "id" : UUID("e8cb1d94-172d-49e3-a925-a531d9b0c7b9") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500 [jsTest] New session started with sessionID: { "id" : UUID("80eb4c13-4533-4db3-8812-abba6eb64844") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.614-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500 [jsTest] New session started with sessionID: { "id" : UUID("e12d55b7-ce4b-4e1d-85bc-f0ff8a01d9da") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500 [jsTest] New session started with sessionID: { "id" : UUID("1048384e-1242-4da9-8b9d-092d1c1eb274") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500 [jsTest] Freezing nodes: [localhost:20002,localhost:20003]
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500 [jsTest] Freezing nodes: [localhost:20005,localhost:20006]
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.615-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 ReplSetTest awaitReplication: going to check only localhost:20002,localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 ReplSetTest awaitReplication: starting: optime for primary, localhost:20001, is { "ts" : Timestamp(1574796697, 6), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 ReplSetTest awaitReplication: checking secondaries against latest primary optime { "ts" : Timestamp(1574796697, 6), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 ReplSetTest awaitReplication: checking secondary #0: localhost:20002
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 ReplSetTest awaitReplication: secondary #0, localhost:20002, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 ReplSetTest awaitReplication: checking secondary #1: localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 ReplSetTest awaitReplication: secondary #1, localhost:20003, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 ReplSetTest awaitReplication: finished: all 2 secondaries synced at optime { "ts" : Timestamp(1574796697, 6), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 checkDBHashesForReplSet checking data hashes against primary: localhost:20001
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20002
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 ReplSetTest awaitReplication: going to check only localhost:20005,localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 ReplSetTest awaitReplication: starting: optime for primary, localhost:20004, is { "ts" : Timestamp(1574796697, 8), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 ReplSetTest awaitReplication: checking secondaries against latest primary optime { "ts" : Timestamp(1574796697, 8), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 ReplSetTest awaitReplication: checking secondary #0: localhost:20005
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 ReplSetTest awaitReplication: secondary #0, localhost:20005, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 ReplSetTest awaitReplication: checking secondary #1: localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 ReplSetTest awaitReplication: secondary #1, localhost:20006, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 ReplSetTest awaitReplication: finished: all 2 secondaries synced at optime { "ts" : Timestamp(1574796697, 8), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 checkDBHashesForReplSet checking data hashes against primary: localhost:20004
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20005
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.616-0500 Finished data consistency checks for cluster in 388 ms.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.876-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 (d997bf94-238b-49fc-9338-fc2aecfcb151)'. Ident: 'index-149--7234316082034423155', commit timestamp: 'Timestamp(1574796694, 3097)'
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:39.617-0500 JSTest jstests/hooks/run_check_repl_dbhash.js finished.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.091-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-231-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3) with drop timestamp Timestamp(1574796680, 4431)
[executor:fsm_workload_test:job0] 2019-11-26T14:31:39.617-0500 agg_out:CheckReplDBHash ran in 2.03 seconds: no failures detected.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.855-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 (d997bf94-238b-49fc-9338-fc2aecfcb151)'. Ident: 'index-149--2310912778499990807', commit timestamp: 'Timestamp(1574796694, 3097)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.855-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9'. Ident: collection-139--2310912778499990807, commit timestamp: Timestamp(1574796694, 3097)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.855-0500 I COMMAND [ReplWriterWorker-12] CMD: drop test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:37.746-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53356 #68 (13 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:37.745-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52464 #71 (12 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:34.861-0500 I COMMAND [conn19] command admin.$cmd appName: "tid:0" command: _configsvrShardCollection { _configsvrShardCollection: "test2_fsmdb0.agg_out", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("dc61b4f9-aa45-4579-b7ea-98c3efc65c32"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1574796694, 1590), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45198", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 1535), t: 1 } }, $db: "admin" } numYields:0 reslen:586 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 6 } }, Global: { acquireCount: { r: 2, w: 4 } }, Database: { acquireCount: { r: 2, w: 4 } }, Collection: { acquireCount: { r: 2, w: 4 } }, Mutex: { acquireCount: { r: 10, W: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 110ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.876-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9'. Ident: collection-139--7234316082034423155, commit timestamp: Timestamp(1574796694, 3097)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.092-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-234-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3) with drop timestamp Timestamp(1574796680, 4431)
[executor:fsm_workload_test:job0] 2019-11-26T14:31:39.619-0500 Running agg_out:ValidateCollections...
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.742-0500 I INDEX [conn85] index build: starting on test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:39.620-0500 Starting JSTest jstests/hooks/run_validate_collections.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_validate_collections"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_validate_collections.js
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:37.734-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45276 #115 (2 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.856-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 (08dbd6f5-b7d8-47c3-b06b-600c165e66f1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796694, 3098), t: 1 } and commit timestamp Timestamp(1574796694, 3098)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:37.746-0500 I NETWORK [conn68] received client metadata from 127.0.0.1:53356 conn68: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:37.745-0500 I NETWORK [conn71] received client metadata from 127.0.0.1:52464 conn71: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.330-0500 I SHARDING [conn23] distributed lock 'test2_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d965cde74b6784bb617
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.877-0500 I COMMAND [ReplWriterWorker-2] CMD: drop test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.092-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-229-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3) with drop timestamp Timestamp(1574796680, 4431)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.742-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:37.734-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45278 #116 (3 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.856-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 (08dbd6f5-b7d8-47c3-b06b-600c165e66f1).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:37.761-0500 I COMMAND [conn68] Attempting to step down in response to replSetStepDown command
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:37.760-0500 I COMMAND [conn71] Attempting to step down in response to replSetStepDown command
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.330-0500 I SHARDING [conn23] Enabling sharding for database [test2_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.877-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 (08dbd6f5-b7d8-47c3-b06b-600c165e66f1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796694, 3098), t: 1 } and commit timestamp Timestamp(1574796694, 3098)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.093-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-237-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a) with drop timestamp Timestamp(1574796680, 4560)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.742-0500 I STORAGE [conn85] Index build initialized: a9f54b2d-fcae-43a1-8096-8ba49dcef3a6: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 (08dbd6f5-b7d8-47c3-b06b-600c165e66f1 ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:37.734-0500 I NETWORK [conn115] received client metadata from 127.0.0.1:45276 conn115: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.856-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 (08dbd6f5-b7d8-47c3-b06b-600c165e66f1)'. Ident: 'index-142--2310912778499990807', commit timestamp: 'Timestamp(1574796694, 3098)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:37.762-0500 I REPL [conn68] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:37.761-0500 I REPL [conn71] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.331-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7d965cde74b6784bb617 unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.877-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 (08dbd6f5-b7d8-47c3-b06b-600c165e66f1).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.094-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-238-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a) with drop timestamp Timestamp(1574796680, 4560)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.095-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-235-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a) with drop timestamp Timestamp(1574796680, 4560)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:37.734-0500 I NETWORK [conn116] received client metadata from 127.0.0.1:45278 conn116: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:37.959-0500 I NETWORK [conn115] end connection 127.0.0.1:45276 (2 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:37.958-0500 I REPL [conn68] 'unfreezing'
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:39.626-0500 JSTest jstests/hooks/run_validate_collections.js started with pid 15773.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:37.958-0500 I REPL [conn71] 'unfreezing'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.331-0500 I COMMAND [conn23] command admin.$cmd appName: "tid:3" command: _configsvrEnableSharding { _configsvrEnableSharding: "test2_fsmdb0", writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("86806ebc-9bc8-4f4d-8806-c8bf25e31db3"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1574796694, 3098), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58350", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 2351), t: 1 } }, $db: "admin" } numYields:0 reslen:505 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 5 } }, ReplicationStateTransition: { acquireCount: { w: 9 } }, Global: { acquireCount: { r: 5, w: 4 } }, Database: { acquireCount: { r: 4, w: 4 } }, Collection: { acquireCount: { r: 3, w: 4 } }, Mutex: { acquireCount: { r: 10 } }, oplog: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 504ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.877-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 (08dbd6f5-b7d8-47c3-b06b-600c165e66f1)'. Ident: 'index-142--7234316082034423155', commit timestamp: 'Timestamp(1574796694, 3098)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.742-0500 I INDEX [conn85] Waiting for index build to complete: a9f54b2d-fcae-43a1-8096-8ba49dcef3a6
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.097-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-241-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e) with drop timestamp Timestamp(1574796680, 5119)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.856-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 (08dbd6f5-b7d8-47c3-b06b-600c165e66f1)'. Ident: 'index-151--2310912778499990807', commit timestamp: 'Timestamp(1574796694, 3098)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:38.056-0500 I NETWORK [conn116] end connection 127.0.0.1:45278 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:37.960-0500 I NETWORK [conn68] end connection 127.0.0.1:53356 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:37.960-0500 I NETWORK [conn71] end connection 127.0.0.1:52464 (11 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.334-0500 I SHARDING [conn23] distributed lock 'test2_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d975cde74b6784bb629
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.877-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 (08dbd6f5-b7d8-47c3-b06b-600c165e66f1)'. Ident: 'index-151--7234316082034423155', commit timestamp: 'Timestamp(1574796694, 3098)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.742-0500 I INDEX [conn77] Index build completed: be717c1d-7de2-4baa-bf03-42335dcaa367
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.098-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-242-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e) with drop timestamp Timestamp(1574796680, 5119)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.856-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220'. Ident: collection-141--2310912778499990807, commit timestamp: Timestamp(1574796694, 3098)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:38.059-0500 I NETWORK [conn114] end connection 127.0.0.1:45254 (0 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:38.067-0500 I NETWORK [conn67] end connection 127.0.0.1:53324 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:38.067-0500 I NETWORK [conn70] end connection 127.0.0.1:52440 (10 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.335-0500 I SHARDING [conn23] distributed lock 'test2_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d975cde74b6784bb62b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.877-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220'. Ident: collection-141--7234316082034423155, commit timestamp: Timestamp(1574796694, 3098)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.742-0500 I COMMAND [conn82] renameCollectionForCommand: rename test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f to test2_fsmdb0.agg_out and drop test2_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.098-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-239-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e) with drop timestamp Timestamp(1574796680, 5119)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.872-0500 I INDEX [ReplWriterWorker-3] index build: starting on config.cache.chunks.test2_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.600-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.891-0500 I INDEX [ReplWriterWorker-7] index build: starting on config.cache.chunks.test2_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.742-0500 I COMMAND [conn77] command test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3be876c6-d64e-4ca7-b4fd-5b1e9c261d78"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 573), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45202", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 565), t: 1 } }, $db: "test2_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 108ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.099-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-245-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8) with drop timestamp Timestamp(1574796680, 5559)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.872-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.601-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.891-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.742-0500 I STORAGE [conn82] dropCollection: test2_fsmdb0.agg_out (9d032268-b7b7-4429-b5aa-61c323334f6e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796694, 1469), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.100-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-246-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8) with drop timestamp Timestamp(1574796680, 5559)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.872-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 4efe88f5-866e-44b8-bfe7-7d393eb150d0: config.cache.chunks.test2_fsmdb0.agg_out (13bc0717-3ecb-47d5-aedd-db010ec932d6 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.603-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7d975cde74b6784bb62b' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.891-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 0f2cc66d-478c-43ba-bcff-81c69c3d4712: config.cache.chunks.test2_fsmdb0.agg_out (13bc0717-3ecb-47d5-aedd-db010ec932d6 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.742-0500 I STORAGE [conn82] Finishing collection drop for test2_fsmdb0.agg_out (9d032268-b7b7-4429-b5aa-61c323334f6e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.101-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-243-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8) with drop timestamp Timestamp(1574796680, 5559)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.872-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.604-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7d975cde74b6784bb629' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.891-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.742-0500 I STORAGE [conn82] renameCollection: renaming collection 7a23accc-ea31-4729-b99e-5394e0ac262c from test2_fsmdb0.tmp.agg_out.ed0d248d-27b4-482a-9050-438a4f92431f to test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.102-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-249-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc) with drop timestamp Timestamp(1574796680, 5560)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.872-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.604-0500 I COMMAND [conn23] command admin.$cmd appName: "tid:3" command: _configsvrShardCollection { _configsvrShardCollection: "test2_fsmdb0.agg_out", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("86806ebc-9bc8-4f4d-8806-c8bf25e31db3"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1574796695, 2), signature: { hash: BinData(0, 537AF1CA0E5BF829360A450B5C5B4FE27BD08071), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58350", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796695, 2), t: 1 } }, $db: "admin" } numYields:0 reslen:586 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 6 } }, Global: { acquireCount: { r: 2, w: 4 } }, Database: { acquireCount: { r: 2, w: 4 } }, Collection: { acquireCount: { r: 2, w: 4 } }, Mutex: { acquireCount: { r: 10, W: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 271ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.892-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.742-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.agg_out (9d032268-b7b7-4429-b5aa-61c323334f6e)'. Ident: 'index-121--2588534479858262356', commit timestamp: 'Timestamp(1574796694, 1469)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.103-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-250-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc) with drop timestamp Timestamp(1574796680, 5560)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.874-0500 I STORAGE [ReplWriterWorker-5] createCollection: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e with provided UUID: b3750738-7e2e-471e-a530-9cc710d06e53 and options: { uuid: UUID("b3750738-7e2e-471e-a530-9cc710d06e53"), temp: true }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.614-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d975cde74b6784bb63e
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.892-0500 I STORAGE [ReplWriterWorker-1] createCollection: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e with provided UUID: b3750738-7e2e-471e-a530-9cc710d06e53 and options: { uuid: UUID("b3750738-7e2e-471e-a530-9cc710d06e53"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.742-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.agg_out (9d032268-b7b7-4429-b5aa-61c323334f6e)'. Ident: 'index-122--2588534479858262356', commit timestamp: 'Timestamp(1574796694, 1469)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.104-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-248-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc) with drop timestamp Timestamp(1574796680, 5560)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.874-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.615-0500 I SHARDING [conn19] Enabling sharding for database [test2_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.894-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.742-0500 I STORAGE [conn82] Deferring table drop for collection 'test2_fsmdb0.agg_out'. Ident: collection-120--2588534479858262356, commit timestamp: Timestamp(1574796694, 1469)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.105-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-227-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.883-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 4efe88f5-866e-44b8-bfe7-7d393eb150d0: config.cache.chunks.test2_fsmdb0.agg_out ( 13bc0717-3ecb-47d5-aedd-db010ec932d6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.616-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d975cde74b6784bb63e' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.904-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 0f2cc66d-478c-43ba-bcff-81c69c3d4712: config.cache.chunks.test2_fsmdb0.agg_out ( 13bc0717-3ecb-47d5-aedd-db010ec932d6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.742-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.106-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-232-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.890-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.620-0500 I SHARDING [conn17] distributed lock 'test2_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d975cde74b6784bb646
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.912-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.743-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.107-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-225-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.905-0500 I INDEX [ReplWriterWorker-9] index build: starting on test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.621-0500 I SHARDING [conn17] distributed lock 'test2_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d975cde74b6784bb64a
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.926-0500 I INDEX [ReplWriterWorker-11] index build: starting on test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.743-0500 I COMMAND [conn62] command test2_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("dc61b4f9-aa45-4579-b7ea-98c3efc65c32"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6955256874552990242, ns: "test2_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7867739412618517458, ns: "test2_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test2_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796694592), clusterTime: Timestamp(1574796694, 568) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("dc61b4f9-aa45-4579-b7ea-98c3efc65c32"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 568), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45198", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 565), t: 1 } }, $db: "test2_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 149ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.108-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-253-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 506)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.905-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.623-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.926-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.743-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.109-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-255-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 506)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.905-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 7f78acad-fc29-48b7-a943-46b9466cddc0: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 (68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.623-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.926-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 54baea9c-d6e8-476d-8951-2b962ea79bae: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 (68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.744-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.109-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-252-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 506)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.905-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.625-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7d975cde74b6784bb64a' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.926-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.746-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.110-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-256-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2213)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.905-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.626-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7d975cde74b6784bb646' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.927-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.749-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.112-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-258-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2213)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.906-0500 I SHARDING [ReplWriterWorker-8] Marking collection config.cache.chunks.test2_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.632-0500 I SHARDING [conn17] distributed lock 'test2_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d975cde74b6784bb657
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.928-0500 I SHARDING [ReplWriterWorker-4] Marking collection config.cache.chunks.test2_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.749-0500 I COMMAND [conn84] renameCollectionForCommand: rename test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 to test2_fsmdb0.agg_out and drop test2_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.113-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-254-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2213)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.907-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.632-0500 I SHARDING [conn17] Enabling sharding for database [test2_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.929-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.749-0500 I STORAGE [conn84] dropCollection: test2_fsmdb0.agg_out (7a23accc-ea31-4729-b99e-5394e0ac262c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796694, 1590), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.114-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-262-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2214)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.909-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 7f78acad-fc29-48b7-a943-46b9466cddc0: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 ( 68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.633-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7d975cde74b6784bb657' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.930-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 54baea9c-d6e8-476d-8951-2b962ea79bae: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 ( 68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.749-0500 I STORAGE [conn84] Finishing collection drop for test2_fsmdb0.agg_out (7a23accc-ea31-4729-b99e-5394e0ac262c).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.115-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-266-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2214)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.927-0500 I INDEX [ReplWriterWorker-3] index build: starting on test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.636-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d975cde74b6784bb65f
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.947-0500 I INDEX [ReplWriterWorker-6] index build: starting on test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.749-0500 I STORAGE [conn84] renameCollection: renaming collection b1a5c7a3-d406-439a-9c39-a502710d3e37 from test2_fsmdb0.tmp.agg_out.e7dba0d0-7654-44df-bf83-0fc62dd6e545 to test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.115-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-260-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2214)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.927-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.637-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d975cde74b6784bb663
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.947-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.749-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.agg_out (7a23accc-ea31-4729-b99e-5394e0ac262c)'. Ident: 'index-129--2588534479858262356', commit timestamp: 'Timestamp(1574796694, 1590)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.116-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-263-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2343)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.927-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: c5a6990e-a54f-46c2-a3fd-9f3ec628d35a: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e (b3750738-7e2e-471e-a530-9cc710d06e53 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.639-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.947-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: cbb41476-5bfc-429a-b4b5-571b0e726231: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e (b3750738-7e2e-471e-a530-9cc710d06e53 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.749-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.agg_out (7a23accc-ea31-4729-b99e-5394e0ac262c)'. Ident: 'index-134--2588534479858262356', commit timestamp: 'Timestamp(1574796694, 1590)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.117-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-270-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2343)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.927-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.639-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.947-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.749-0500 I STORAGE [conn84] Deferring table drop for collection 'test2_fsmdb0.agg_out'. Ident: collection-124--2588534479858262356, commit timestamp: Timestamp(1574796694, 1590)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.118-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-261-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2343)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.927-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.640-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d975cde74b6784bb663' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.947-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.749-0500 I COMMAND [conn65] command test2_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("edd46bd6-7845-419a-b11b-5e92d9ad8dd5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7007090010784509663, ns: "test2_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7308616985994250753, ns: "test2_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test2_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796694592), clusterTime: Timestamp(1574796694, 568) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("edd46bd6-7845-419a-b11b-5e92d9ad8dd5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 568), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45210", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 565), t: 1 } }, $db: "test2_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 156ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.120-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-269-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c) with drop timestamp Timestamp(1574796683, 3029)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.930-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.641-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d975cde74b6784bb65f' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.949-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.751-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 58b41d0a-e84f-4921-a652-4584a1790492: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 ( d997bf94-238b-49fc-9338-fc2aecfcb151 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.121-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-276-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c) with drop timestamp Timestamp(1574796683, 3029)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:34.932-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c5a6990e-a54f-46c2-a3fd-9f3ec628d35a: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e ( b3750738-7e2e-471e-a530-9cc710d06e53 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.642-0500 I SHARDING [conn17] distributed lock 'test2_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d975cde74b6784bb66e
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:34.950-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: cbb41476-5bfc-429a-b4b5-571b0e726231: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e ( b3750738-7e2e-471e-a530-9cc710d06e53 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.751-0500 I INDEX [conn88] Index build completed: 58b41d0a-e84f-4921-a652-4584a1790492
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.121-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-267-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c) with drop timestamp Timestamp(1574796683, 3029)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.598-0500 I COMMAND [ReplWriterWorker-9] CMD: drop test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.643-0500 I SHARDING [conn17] Enabling sharding for database [test2_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.598-0500 I COMMAND [ReplWriterWorker-13] CMD: drop test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.751-0500 I COMMAND [conn88] command test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("86806ebc-9bc8-4f4d-8806-c8bf25e31db3"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 573), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58350", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 565), t: 1 } }, $db: "test2_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 109ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.122-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-275-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88) with drop timestamp Timestamp(1574796683, 3034)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.598-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 (68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796695, 5), t: 1 } and commit timestamp Timestamp(1574796695, 5)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.644-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7d975cde74b6784bb66e' unlocked.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.598-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 (68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796695, 5), t: 1 } and commit timestamp Timestamp(1574796695, 5)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.754-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: a9f54b2d-fcae-43a1-8096-8ba49dcef3a6: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 ( 08dbd6f5-b7d8-47c3-b06b-600c165e66f1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.123-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-278-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88) with drop timestamp Timestamp(1574796683, 3034)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.598-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 (68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.646-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d975cde74b6784bb677
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.598-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 (68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.754-0500 I INDEX [conn85] Index build completed: a9f54b2d-fcae-43a1-8096-8ba49dcef3a6
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.124-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-273-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88) with drop timestamp Timestamp(1574796683, 3034)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.598-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 (68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95)'. Ident: 'index-154--2310912778499990807', commit timestamp: 'Timestamp(1574796695, 5)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.649-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d975cde74b6784bb67c
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.598-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 (68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95)'. Ident: 'index-154--7234316082034423155', commit timestamp: 'Timestamp(1574796695, 5)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.754-0500 I COMMAND [conn85] command test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("81d4e471-c714-4e93-a360-4ad87027dca4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 573), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58342", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 565), t: 1 } }, $db: "test2_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 105ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.125-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-265-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 3540)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.598-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 (68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95)'. Ident: 'index-161--2310912778499990807', commit timestamp: 'Timestamp(1574796695, 5)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.650-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.598-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 (68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95)'. Ident: 'index-161--7234316082034423155', commit timestamp: 'Timestamp(1574796695, 5)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.757-0500 I SHARDING [conn55] CMD: shardcollection: { _shardsvrShardCollection: "test2_fsmdb0.agg_out", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("dc61b4f9-aa45-4579-b7ea-98c3efc65c32"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 1654), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45198", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 1592), t: 1 } }, $db: "admin" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.126-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-272-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 3540)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.598-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946'. Ident: collection-153--2310912778499990807, commit timestamp: Timestamp(1574796695, 5)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.651-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.598-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946'. Ident: collection-153--7234316082034423155, commit timestamp: Timestamp(1574796695, 5)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.757-0500 I SHARDING [conn55] about to log metadata event into changelog: { _id: "nz_desktop:20004-2019-11-26T14:31:34.757-0500-5ddd7d96cf8184c2e1493a4c", server: "nz_desktop:20004", shard: "shard-rs1", clientAddr: "127.0.0.1:46028", time: new Date(1574796694757), what: "shardCollection.start", ns: "test2_fsmdb0.agg_out", details: { shardKey: { _id: "hashed" }, collection: "test2_fsmdb0.agg_out", uuid: UUID("b1a5c7a3-d406-439a-9c39-a502710d3e37"), empty: false, fromMapReduce: false, primary: "shard-rs1:shard-rs1/localhost:20004,localhost:20005,localhost:20006", numChunks: 1 } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.127-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-264-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 3540)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.619-0500 I COMMAND [ReplWriterWorker-1] CMD: drop test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.652-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d975cde74b6784bb67c' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.757-0500 I STORAGE [conn85] createCollection: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 with generated UUID: 68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.619-0500 I COMMAND [ReplWriterWorker-3] CMD: drop test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.128-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-283-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 4178)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.619-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e (b3750738-7e2e-471e-a530-9cc710d06e53) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796695, 509), t: 1 } and commit timestamp Timestamp(1574796695, 509)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.653-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d975cde74b6784bb677' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.763-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 1 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.619-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e (b3750738-7e2e-471e-a530-9cc710d06e53) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796695, 509), t: 1 } and commit timestamp Timestamp(1574796695, 509)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.129-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-286-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 4178)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.619-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e (b3750738-7e2e-471e-a530-9cc710d06e53).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.671-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d975cde74b6784bb68f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.764-0500 I STORAGE [ShardServerCatalogCacheLoader-1] createCollection: config.cache.chunks.test2_fsmdb0.agg_out with provided UUID: 13bc0717-3ecb-47d5-aedd-db010ec932d6 and options: { uuid: UUID("13bc0717-3ecb-47d5-aedd-db010ec932d6") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.619-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e (b3750738-7e2e-471e-a530-9cc710d06e53).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.130-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-280-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 4178)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.619-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e (b3750738-7e2e-471e-a530-9cc710d06e53)'. Ident: 'index-160--2310912778499990807', commit timestamp: 'Timestamp(1574796695, 509)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.671-0500 I SHARDING [conn19] Enabling sharding for database [test2_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.783-0500 I INDEX [conn85] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.619-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e (b3750738-7e2e-471e-a530-9cc710d06e53)'. Ident: 'index-160--7234316082034423155', commit timestamp: 'Timestamp(1574796695, 509)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.131-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-284-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5052)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.619-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e (b3750738-7e2e-471e-a530-9cc710d06e53)'. Ident: 'index-163--2310912778499990807', commit timestamp: 'Timestamp(1574796695, 509)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.673-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d975cde74b6784bb68f' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.783-0500 I COMMAND [conn77] renameCollectionForCommand: rename test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc to test2_fsmdb0.agg_out and drop test2_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.619-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e (b3750738-7e2e-471e-a530-9cc710d06e53)'. Ident: 'index-163--7234316082034423155', commit timestamp: 'Timestamp(1574796695, 509)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.132-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-288-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5052)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.619-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e'. Ident: collection-159--2310912778499990807, commit timestamp: Timestamp(1574796695, 509)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.676-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d975cde74b6784bb69b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.784-0500 I STORAGE [conn77] dropCollection: test2_fsmdb0.agg_out (b1a5c7a3-d406-439a-9c39-a502710d3e37) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796694, 2222), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.619-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e'. Ident: collection-159--7234316082034423155, commit timestamp: Timestamp(1574796695, 509)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.132-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-281-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5052)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.754-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52034 #66 (11 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.677-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d975cde74b6784bb69d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.784-0500 I STORAGE [conn77] Finishing collection drop for test2_fsmdb0.agg_out (b1a5c7a3-d406-439a-9c39-a502710d3e37).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.754-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35392 #60 (11 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.133-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-285-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5053)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.754-0500 I NETWORK [conn66] received client metadata from 127.0.0.1:52034 conn66: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.678-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.784-0500 I STORAGE [conn77] renameCollection: renaming collection 08932b51-9933-4490-ab6b-1df6cfb57633 from test2_fsmdb0.tmp.agg_out.82dd3965-e458-4674-9968-a1581b14d2fc to test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.754-0500 I NETWORK [conn60] received client metadata from 127.0.0.1:35392 conn60: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.135-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-292-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5053)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.825-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52056 #67 (12 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.679-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.784-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.agg_out (b1a5c7a3-d406-439a-9c39-a502710d3e37)'. Ident: 'index-130--2588534479858262356', commit timestamp: 'Timestamp(1574796694, 2222)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.826-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35418 #61 (12 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.136-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-282-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5053)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.826-0500 I NETWORK [conn67] received client metadata from 127.0.0.1:52056 conn67: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.680-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d975cde74b6784bb69d' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.784-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.agg_out (b1a5c7a3-d406-439a-9c39-a502710d3e37)'. Ident: 'index-136--2588534479858262356', commit timestamp: 'Timestamp(1574796694, 2222)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.826-0500 I NETWORK [conn61] received client metadata from 127.0.0.1:35418 conn61: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.137-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-291-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5624)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.836-0500 W CONTROL [conn67] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 40 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.681-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d975cde74b6784bb69b' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.784-0500 I STORAGE [conn77] Deferring table drop for collection 'test2_fsmdb0.agg_out'. Ident: collection-125--2588534479858262356, commit timestamp: Timestamp(1574796694, 2222)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.836-0500 W CONTROL [conn61] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 43 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.137-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-296-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5624)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.858-0500 W CONTROL [conn67] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 40 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.688-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d975cde74b6784bb6ac
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.784-0500 I SHARDING [conn55] Marking collection test2_fsmdb0.agg_out as collection version: 1|0||5ddd7d96cf8184c2e1493a53, shard version: 1|0||5ddd7d96cf8184c2e1493a53
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.858-0500 W CONTROL [conn61] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 43 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.138-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-289-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5624)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.860-0500 I NETWORK [conn67] end connection 127.0.0.1:52056 (11 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.688-0500 I SHARDING [conn19] Enabling sharding for database [test2_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.784-0500 I INDEX [conn85] Registering index build: b9271b1e-24b6-4520-bf36-a453cdc81075
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.860-0500 I NETWORK [conn61] end connection 127.0.0.1:35418 (11 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.139-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-295-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 6064)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:35.870-0500 I NETWORK [conn66] end connection 127.0.0.1:52034 (10 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.689-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d975cde74b6784bb6ac' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.784-0500 I SHARDING [conn55] Created 1 chunk(s) for: test2_fsmdb0.agg_out, producing collection version 1|0||5ddd7d96cf8184c2e1493a53
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:35.870-0500 I NETWORK [conn60] end connection 127.0.0.1:35392 (10 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.140-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-298-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 6064)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:36.211-0500 I NETWORK [conn64] end connection 127.0.0.1:51972 (9 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.691-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d975cde74b6784bb6b2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.784-0500 I SHARDING [conn55] about to log metadata event into changelog: { _id: "nz_desktop:20004-2019-11-26T14:31:34.784-0500-5ddd7d96cf8184c2e1493a7d", server: "nz_desktop:20004", shard: "shard-rs1", clientAddr: "127.0.0.1:46028", time: new Date(1574796694784), what: "shardCollection.end", ns: "test2_fsmdb0.agg_out", details: { version: "1|0||5ddd7d96cf8184c2e1493a53" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:36.212-0500 I NETWORK [conn58] end connection 127.0.0.1:35334 (9 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.141-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-293-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 6064)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:36.223-0500 I NETWORK [conn63] end connection 127.0.0.1:51934 (8 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.692-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d975cde74b6784bb6b4
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.784-0500 I COMMAND [conn64] command test2_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("3be876c6-d64e-4ca7-b4fd-5b1e9c261d78"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7134554159121901201, ns: "test2_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1882519617537953442, ns: "test2_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test2_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796694592), clusterTime: Timestamp(1574796694, 568) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3be876c6-d64e-4ca7-b4fd-5b1e9c261d78"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 568), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45202", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 565), t: 1 } }, $db: "test2_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 191ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:36.223-0500 I NETWORK [conn57] end connection 127.0.0.1:35296 (8 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:37.681-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52076 #68 (9 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.824-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39224 #135 (37 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.693-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.787-0500 I COMMAND [conn64] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:37.680-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35434 #62 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:37.681-0500 I NETWORK [conn68] received client metadata from 127.0.0.1:52076 conn68: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.824-0500 I NETWORK [conn135] received client metadata from 127.0.0.1:39224 conn135: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.694-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.793-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: done building index _id_ on ns config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:37.681-0500 I NETWORK [conn62] received client metadata from 127.0.0.1:35434 conn62: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:37.745-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52096 #69 (10 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.838-0500 I COMMAND [conn37] CMD: drop test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.695-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d975cde74b6784bb6b4' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.793-0500 I INDEX [ShardServerCatalogCacheLoader-1] Registering index build: fcaf6048-1e19-4791-a19c-f830b8735024
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:37.746-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35460 #63 (10 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:37.745-0500 I NETWORK [conn69] received client metadata from 127.0.0.1:52096 conn69: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.839-0500 I STORAGE [conn37] dropCollection: test1_fsmdb0.agg_out (5e50e75c-c327-4f05-bb46-1ea87905b919) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.697-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d975cde74b6784bb6b2' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.808-0500 I INDEX [conn85] index build: starting on test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:37.746-0500 I NETWORK [conn63] received client metadata from 127.0.0.1:35460 conn63: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:37.761-0500 I COMMAND [conn69] Attempting to step down in response to replSetStepDown command
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.839-0500 I STORAGE [conn37] Finishing collection drop for test1_fsmdb0.agg_out (5e50e75c-c327-4f05-bb46-1ea87905b919).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.716-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d975cde74b6784bb6c3
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.808-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:37.763-0500 I COMMAND [conn63] Attempting to step down in response to replSetStepDown command
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:37.762-0500 I REPL [conn69] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.839-0500 I STORAGE [conn37] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.agg_out (5e50e75c-c327-4f05-bb46-1ea87905b919)'. Ident: 'index-301-8224331490264904478', commit timestamp: 'Timestamp(1574796692, 5)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.717-0500 I SHARDING [conn19] Enabling sharding for database [test2_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.808-0500 I STORAGE [conn85] Index build initialized: b9271b1e-24b6-4520-bf36-a453cdc81075: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 (68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:37.764-0500 I REPL [conn63] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:38.054-0500 I REPL [conn69] 'unfreezing'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.839-0500 I STORAGE [conn37] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.agg_out (5e50e75c-c327-4f05-bb46-1ea87905b919)'. Ident: 'index-302-8224331490264904478', commit timestamp: 'Timestamp(1574796692, 5)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.718-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d975cde74b6784bb6c3' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.808-0500 I INDEX [conn85] Waiting for index build to complete: b9271b1e-24b6-4520-bf36-a453cdc81075
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:38.055-0500 I REPL [conn63] 'unfreezing'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:38.056-0500 I NETWORK [conn69] end connection 127.0.0.1:52096 (9 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.839-0500 I STORAGE [conn37] Deferring table drop for collection 'test1_fsmdb0.agg_out'. Ident: collection-300-8224331490264904478, commit timestamp: Timestamp(1574796692, 5)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.720-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d975cde74b6784bb6c9
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.809-0500 I COMMAND [conn88] renameCollectionForCommand: rename test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 to test2_fsmdb0.agg_out and drop test2_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:38.056-0500 I NETWORK [conn63] end connection 127.0.0.1:35460 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:38.067-0500 I NETWORK [conn68] end connection 127.0.0.1:52076 (8 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.846-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.agg_out took 0 ms and found the collection is not sharded
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.721-0500 I SHARDING [conn19] distributed lock 'test2_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d975cde74b6784bb6cb
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.822-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: starting on config.cache.chunks.test2_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:38.067-0500 I NETWORK [conn62] end connection 127.0.0.1:35434 (8 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.846-0500 I SHARDING [conn37] Updating metadata for collection test1_fsmdb0.agg_out from collection version: 1|0||5ddd7d8e3bbfe7fa5630e252, shard version: 1|0||5ddd7d8e3bbfe7fa5630e252 to collection version: due to UUID change
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.722-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.822-0500 I INDEX [ShardServerCatalogCacheLoader-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.846-0500 I COMMAND [ShardServerCatalogCacheLoader-0] CMD: drop config.cache.chunks.test1_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.723-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.822-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Index build initialized: fcaf6048-1e19-4791-a19c-f830b8735024: config.cache.chunks.test2_fsmdb0.agg_out (13bc0717-3ecb-47d5-aedd-db010ec932d6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.846-0500 I STORAGE [ShardServerCatalogCacheLoader-0] dropCollection: config.cache.chunks.test1_fsmdb0.agg_out (ad34fc50-677f-4846-b03c-7b24f5f1669a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.724-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d975cde74b6784bb6cb' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.822-0500 I INDEX [ShardServerCatalogCacheLoader-1] Waiting for index build to complete: fcaf6048-1e19-4791-a19c-f830b8735024
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.846-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Finishing collection drop for config.cache.chunks.test1_fsmdb0.agg_out (ad34fc50-677f-4846-b03c-7b24f5f1669a).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.725-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7d975cde74b6784bb6c9' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.823-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.846-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test1_fsmdb0.agg_out (ad34fc50-677f-4846-b03c-7b24f5f1669a)'. Ident: 'index-319-8224331490264904478', commit timestamp: 'Timestamp(1574796692, 9)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.748-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56616 #109 (38 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.823-0500 I COMMAND [conn84] renameCollectionForCommand: rename test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 to test2_fsmdb0.agg_out and drop test2_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.846-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test1_fsmdb0.agg_out (ad34fc50-677f-4846-b03c-7b24f5f1669a)'. Ident: 'index-322-8224331490264904478', commit timestamp: 'Timestamp(1574796692, 9)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.748-0500 I NETWORK [conn109] received client metadata from 127.0.0.1:56616 conn109: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.823-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.846-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for collection 'config.cache.chunks.test1_fsmdb0.agg_out'. Ident: collection-317-8224331490264904478, commit timestamp: Timestamp(1574796692, 9)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.749-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56618 #110 (39 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.823-0500 I COMMAND [conn88] CMD: drop test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.854-0500 I COMMAND [conn37] CMD: drop test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.749-0500 I NETWORK [conn110] received client metadata from 127.0.0.1:56618 conn110: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.823-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.855-0500 I STORAGE [conn37] dropCollection: test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.864-0500 I NETWORK [conn110] end connection 127.0.0.1:56618 (38 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.823-0500 I STORAGE [conn88] dropCollection: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 (d997bf94-238b-49fc-9338-fc2aecfcb151) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.855-0500 I STORAGE [conn37] Finishing collection drop for test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:35.870-0500 I NETWORK [conn109] end connection 127.0.0.1:56616 (37 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.823-0500 I STORAGE [conn88] Finishing collection drop for test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 (d997bf94-238b-49fc-9338-fc2aecfcb151).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.855-0500 I STORAGE [conn37] Deferring table drop for index '_id_' on collection 'test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38)'. Ident: 'index-37-8224331490264904478', commit timestamp: 'Timestamp(1574796692, 15)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.152-0500 I SHARDING [conn23] distributed lock 'test2_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d975cde74b6784bb67f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.823-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 (d997bf94-238b-49fc-9338-fc2aecfcb151)'. Ident: 'index-132--2588534479858262356', commit timestamp: 'Timestamp(1574796694, 3097)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.855-0500 I STORAGE [conn37] Deferring table drop for index '_id_hashed' on collection 'test1_fsmdb0.fsmcoll0 (dccb4b9f-92a4-4a8c-933f-ac40a7941a38)'. Ident: 'index-38-8224331490264904478', commit timestamp: 'Timestamp(1574796692, 15)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.152-0500 I SHARDING [conn23] Enabling sharding for database [test2_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.823-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9 (d997bf94-238b-49fc-9338-fc2aecfcb151)'. Ident: 'index-140--2588534479858262356', commit timestamp: 'Timestamp(1574796694, 3097)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.855-0500 I STORAGE [conn37] Deferring table drop for collection 'test1_fsmdb0.fsmcoll0'. Ident: collection-36-8224331490264904478, commit timestamp: Timestamp(1574796692, 15)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.153-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7d975cde74b6784bb67f' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.823-0500 I COMMAND [conn84] CMD: drop test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.865-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test1_fsmdb0.fsmcoll0 took 0 ms and found the collection is not sharded
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.153-0500 I COMMAND [conn23] command admin.$cmd appName: "tid:3" command: _configsvrEnableSharding { _configsvrEnableSharding: "test2_fsmdb0", writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("86806ebc-9bc8-4f4d-8806-c8bf25e31db3"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1574796695, 527), signature: { hash: BinData(0, 537AF1CA0E5BF829360A450B5C5B4FE27BD08071), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58350", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796695, 527), t: 1 } }, $db: "admin" } numYields:0 reslen:505 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 5 } }, ReplicationStateTransition: { acquireCount: { w: 9 } }, Global: { acquireCount: { r: 5, w: 4 } }, Database: { acquireCount: { r: 4, w: 4 } }, Collection: { acquireCount: { r: 3, w: 4 } }, Mutex: { acquireCount: { r: 10 } }, oplog: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 506ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.823-0500 I STORAGE [conn88] Deferring table drop for collection 'test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9'. Ident: collection-127--2588534479858262356, commit timestamp: Timestamp(1574796694, 3097)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.865-0500 I SHARDING [conn37] Updating metadata for collection test1_fsmdb0.fsmcoll0 from collection version: 1|3||5ddd7d7d3bbfe7fa5630d6e7, shard version: 1|1||5ddd7d7d3bbfe7fa5630d6e7 to collection version: due to UUID change
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.156-0500 I SHARDING [conn23] distributed lock 'test2_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d985cde74b6784bb6e1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.823-0500 I STORAGE [conn84] dropCollection: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 (08dbd6f5-b7d8-47c3-b06b-600c165e66f1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.865-0500 I COMMAND [ShardServerCatalogCacheLoader-0] CMD: drop config.cache.chunks.test1_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.157-0500 I SHARDING [conn23] distributed lock 'test2_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d985cde74b6784bb6e3
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.823-0500 I STORAGE [conn84] Finishing collection drop for test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 (08dbd6f5-b7d8-47c3-b06b-600c165e66f1).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.865-0500 I STORAGE [ShardServerCatalogCacheLoader-0] dropCollection: config.cache.chunks.test1_fsmdb0.fsmcoll0 (24d02c72-11d8-48c7-b13e-109658af75b4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.159-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.823-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 (08dbd6f5-b7d8-47c3-b06b-600c165e66f1)'. Ident: 'index-133--2588534479858262356', commit timestamp: 'Timestamp(1574796694, 3098)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.865-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Finishing collection drop for config.cache.chunks.test1_fsmdb0.fsmcoll0 (24d02c72-11d8-48c7-b13e-109658af75b4).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.159-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.823-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220 (08dbd6f5-b7d8-47c3-b06b-600c165e66f1)'. Ident: 'index-142--2588534479858262356', commit timestamp: 'Timestamp(1574796694, 3098)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.865-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0 (24d02c72-11d8-48c7-b13e-109658af75b4)'. Ident: 'index-41-8224331490264904478', commit timestamp: 'Timestamp(1574796692, 23)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.160-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7d985cde74b6784bb6e3' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.823-0500 I STORAGE [conn84] Deferring table drop for collection 'test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220'. Ident: collection-128--2588534479858262356, commit timestamp: Timestamp(1574796694, 3098)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.865-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0 (24d02c72-11d8-48c7-b13e-109658af75b4)'. Ident: 'index-42-8224331490264904478', commit timestamp: 'Timestamp(1574796692, 23)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.161-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7d985cde74b6784bb6e1' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.824-0500 I COMMAND [conn81] command test2_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("86806ebc-9bc8-4f4d-8806-c8bf25e31db3"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3254932133138161915, ns: "test2_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1853918703442732226, ns: "test2_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test2_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796694594), clusterTime: Timestamp(1574796694, 565) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("86806ebc-9bc8-4f4d-8806-c8bf25e31db3"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 571), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58350", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 565), t: 1 } }, $db: "test2_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9\", to: \"test2_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:745 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 228ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.865-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for collection 'config.cache.chunks.test1_fsmdb0.fsmcoll0'. Ident: collection-40-8224331490264904478, commit timestamp: Timestamp(1574796692, 23)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.175-0500 I SHARDING [conn17] distributed lock 'test2_fsmdb0' acquired for 'enableSharding', ts : 5ddd7d975cde74b6784bb692
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.824-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.869-0500 I COMMAND [conn37] dropDatabase test1_fsmdb0 - starting
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.175-0500 I SHARDING [conn17] Enabling sharding for database [test2_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.824-0500 I COMMAND [conn80] command test2_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("81d4e471-c714-4e93-a360-4ad87027dca4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 9123515566277368642, ns: "test2_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1134403335354055556, ns: "test2_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test2_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796694594), clusterTime: Timestamp(1574796694, 565) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("81d4e471-c714-4e93-a360-4ad87027dca4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 571), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58342", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 565), t: 1 } }, $db: "test2_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220\", to: \"test2_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:745 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 228ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.869-0500 I COMMAND [conn37] dropDatabase test1_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.176-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7d975cde74b6784bb692' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.824-0500 I SHARDING [ConfigServerCatalogCacheLoader-0] Marking collection config.cache.chunks.test2_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.869-0500 I COMMAND [conn37] dropDatabase test1_fsmdb0 - finished
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.176-0500 I COMMAND [conn17] command admin.$cmd appName: "tid:4" command: _configsvrEnableSharding { _configsvrEnableSharding: "test2_fsmdb0", writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("3be876c6-d64e-4ca7-b4fd-5b1e9c261d78"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1574796695, 535), signature: { hash: BinData(0, 537AF1CA0E5BF829360A450B5C5B4FE27BD08071), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45202", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796695, 531), t: 1 } }, $db: "admin" } numYields:0 reslen:505 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 5 } }, ReplicationStateTransition: { acquireCount: { w: 9 } }, Global: { acquireCount: { r: 5, w: 4 } }, Database: { acquireCount: { r: 4, w: 4 } }, Collection: { acquireCount: { r: 3, w: 4 } }, Mutex: { acquireCount: { r: 10 } }, oplog: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 504ms
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:39.649-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.826-0500 I COMMAND [conn64] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.879-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test1_fsmdb0 took 0 ms and failed :: caused by :: NamespaceNotFound: database test1_fsmdb0 not found
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.930-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.930-0500 Implicit session: session { "id" : UUID("8acec178-523a-4de9-83eb-527cf7d73a14") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.930-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.930-0500 true
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.930-0500 2019-11-26T14:31:39.709-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.930-0500 2019-11-26T14:31:39.709-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.930-0500 2019-11-26T14:31:39.710-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.930-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.930-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.930-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.930-0500 [jsTest] New session started with sessionID: { "id" : UUID("ca751c25-a7d1-4fed-b61e-e473c748ac2d") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.930-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.930-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.930-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.930-0500 2019-11-26T14:31:39.713-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.930-0500 2019-11-26T14:31:39.713-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500 2019-11-26T14:31:39.713-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500 2019-11-26T14:31:39.713-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500 2019-11-26T14:31:39.714-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500 [jsTest] New session started with sessionID: { "id" : UUID("7e108005-184c-4315-a1dd-ef8b9a062794") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500 2019-11-26T14:31:39.716-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500 2019-11-26T14:31:39.716-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500 2019-11-26T14:31:39.716-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500 2019-11-26T14:31:39.716-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.700-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45296 #117 (1 connection now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500 2019-11-26T14:31:39.717-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500 [jsTest] New session started with sessionID: { "id" : UUID("d8b9bf13-5220-480b-b2c0-1fb8a38cc634") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.931-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.932-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.932-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.932-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.932-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.932-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.932-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.932-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.932-0500 Implicit session: session { "id" : UUID("8aa3257f-ef5e-4f09-bd7f-4fb41c658258") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.932-0500 Implicit session: session { "id" : UUID("f4b73b8a-15c7-420a-bbbe-a9b502c128eb") }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.714-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53366 #69 (12 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.932-0500 Implicit session: session { "id" : UUID("2312f533-907b-4d83-93d1-be190a202828") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.932-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.714-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52480 #72 (11 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.932-0500 Implicit session: session { "id" : UUID("76d73793-0083-4510-91c9-aa0ecbdb6187") }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.716-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52116 #70 (9 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.932-0500 Implicit session: session { "id" : UUID("a14547fd-c7cb-4507-8e65-ef90955f286c") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.716-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35478 #64 (9 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.933-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.178-0500 I SHARDING [conn17] distributed lock 'test2_fsmdb0' acquired for 'shardCollection', ts : 5ddd7d985cde74b6784bb6f6
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.933-0500 Implicit session: session { "id" : UUID("7f21cb65-fb17-4e89-84b6-7ecd51f88778") }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.826-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index lastmod_1 on ns config.cache.chunks.test2_fsmdb0.agg_out
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.933-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.879-0500 I SHARDING [conn37] setting this node's cached database version for test1_fsmdb0 to {}
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.933-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.701-0500 I NETWORK [conn117] received client metadata from 127.0.0.1:45296 conn117: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.933-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.933-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.714-0500 I NETWORK [conn69] received client metadata from 127.0.0.1:53366 conn69: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.934-0500 Implicit session: session { "id" : UUID("cbd8b496-0c15-49c8-8dc1-3e53124cd8ba") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.715-0500 I NETWORK [conn72] received client metadata from 127.0.0.1:52480 conn72: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.934-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.716-0500 I NETWORK [conn70] received client metadata from 127.0.0.1:52116 conn70: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.934-0500 Running validate() on localhost:20000
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.934-0500 Running validate() on localhost:20003
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.934-0500 Running validate() on localhost:20002
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.934-0500 Running validate() on localhost:20005
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.934-0500 Running validate() on localhost:20004
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.716-0500 I NETWORK [conn64] received client metadata from 127.0.0.1:35478 conn64: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.934-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.934-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.180-0500 I SHARDING [conn17] distributed lock 'test2_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7d985cde74b6784bb6f8
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.934-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.827-0500 I STORAGE [conn84] createCollection: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e with generated UUID: b3750738-7e2e-471e-a530-9cc710d06e53 and options: { temp: true }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.935-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.935-0500 [jsTest] New session started with sessionID: { "id" : UUID("0686b2b7-b01c-4c25-8c20-dd485e7f3f22") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.935-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.933-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796687, 30)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.935-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.794-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45318 #118 (2 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.935-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.800-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53398 #70 (13 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.935-0500 Running validate() on localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.800-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52512 #73 (12 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.935-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.801-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52144 #71 (10 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.935-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.803-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35510 #65 (10 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.936-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.181-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.936-0500 [jsTest] New session started with sessionID: { "id" : UUID("0205ede0-594d-4e27-aba5-d06435d8f150") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.829-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.936-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.933-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-307-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201) with drop timestamp Timestamp(1574796686, 1011)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.936-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.795-0500 I NETWORK [conn118] received client metadata from 127.0.0.1:45318 conn118: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.936-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.800-0500 I NETWORK [conn70] received client metadata from 127.0.0.1:53398 conn70: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.936-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.800-0500 I NETWORK [conn73] received client metadata from 127.0.0.1:52512 conn73: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.936-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.801-0500 I NETWORK [conn71] received client metadata from 127.0.0.1:52144 conn71: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.937-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.805-0500 I NETWORK [conn65] received client metadata from 127.0.0.1:35510 conn65: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.937-0500 [jsTest] New session started with sessionID: { "id" : UUID("9a6df42a-5916-4641-b294-59136165aa8b") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.181-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out to version 1|0||5ddd7d96cf8184c2e1493a53 took 0 ms
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.937-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.831-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: fcaf6048-1e19-4791-a19c-f830b8735024: config.cache.chunks.test2_fsmdb0.agg_out ( 13bc0717-3ecb-47d5-aedd-db010ec932d6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.937-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.934-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-310-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201) with drop timestamp Timestamp(1574796686, 1011)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.937-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.795-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45320 #119 (3 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.937-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.805-0500 I COMMAND [conn70] CMD: validate admin.system.version, full:true
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.937-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.806-0500 I COMMAND [conn73] CMD: validate admin.system.version, full:true
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.938-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.807-0500 I COMMAND [conn71] CMD: validate admin.system.version, full:true
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.938-0500 [jsTest] New session started with sessionID: { "id" : UUID("9e0bc9e7-5d13-489c-8202-33d02ac37740") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.810-0500 I COMMAND [conn65] CMD: validate admin.system.version, full:true
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.938-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.182-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7d985cde74b6784bb6f8' unlocked.
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.938-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.832-0500 I INDEX [ShardServerCatalogCacheLoader-1] Index build completed: fcaf6048-1e19-4791-a19c-f830b8735024
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.938-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.934-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-304-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201) with drop timestamp Timestamp(1574796686, 1011)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.938-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.795-0500 I NETWORK [conn119] received client metadata from 127.0.0.1:45320 conn119: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.938-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.805-0500 W STORAGE [conn70] Could not complete validation of table:collection-17--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.938-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.806-0500 W STORAGE [conn73] Could not complete validation of table:collection-17--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.939-0500 [jsTest] New session started with sessionID: { "id" : UUID("ec6c76c4-6dd7-4132-83c3-12ceda9140a7") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.808-0500 W STORAGE [conn71] Could not complete validation of table:collection-17--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.939-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.811-0500 W STORAGE [conn65] Could not complete validation of table:collection-17--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.939-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.184-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7d985cde74b6784bb6f6' unlocked.
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.939-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.841-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: b9271b1e-24b6-4520-bf36-a453cdc81075: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 ( 68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.939-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.935-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-308-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163) with drop timestamp Timestamp(1574796686, 1018)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.939-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.795-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45322 #120 (4 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.939-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.805-0500 I INDEX [conn70] validating the internal structure of index _id_ on collection admin.system.version
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.940-0500 [jsTest] New session started with sessionID: { "id" : UUID("28c1d21a-943e-4b32-b9ff-fd616a10adfd") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.806-0500 I INDEX [conn73] validating the internal structure of index _id_ on collection admin.system.version
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.940-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.808-0500 I INDEX [conn71] validating the internal structure of index _id_ on collection admin.system.version
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.940-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.811-0500 I INDEX [conn65] validating the internal structure of index _id_ on collection admin.system.version
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.940-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.209-0500 I NETWORK [conn105] end connection 127.0.0.1:56512 (36 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.940-0500 Running validate() on localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.841-0500 I INDEX [conn85] Index build completed: b9271b1e-24b6-4520-bf36-a453cdc81075
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.940-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.937-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-312-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163) with drop timestamp Timestamp(1574796686, 1018)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.940-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.795-0500 I NETWORK [conn120] received client metadata from 127.0.0.1:45322 conn120: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.940-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.806-0500 W STORAGE [conn70] Could not complete validation of table:index-18--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.941-0500 [jsTest] New session started with sessionID: { "id" : UUID("2ef85b4d-cb2f-4876-ab2a-8aac287534f0") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.806-0500 W STORAGE [conn73] Could not complete validation of table:index-18--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.941-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.808-0500 W STORAGE [conn71] Could not complete validation of table:index-18--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.941-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.811-0500 W STORAGE [conn65] Could not complete validation of table:index-18--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.941-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.210-0500 I NETWORK [conn106] end connection 127.0.0.1:56540 (35 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:41.941-0500 JSTest jstests/hooks/run_validate_collections.js finished.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.848-0500 I INDEX [conn84] index build: done building index _id_ on ns test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e
[executor:fsm_workload_test:job0] 2019-11-26T14:31:41.942-0500 agg_out:ValidateCollections ran in 2.32 seconds: no failures detected.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.938-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-305-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163) with drop timestamp Timestamp(1574796686, 1018)
[executor:fsm_workload_test:job0] 2019-11-26T14:31:41.942-0500 Running agg_out:CleanupConcurrencyWorkloads...
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.796-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45324 #121 (5 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.806-0500 I INDEX [conn70] validating collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.806-0500 I INDEX [conn73] validating collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.808-0500 I INDEX [conn71] validating collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.811-0500 I INDEX [conn65] validating collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.210-0500 I NETWORK [conn107] end connection 127.0.0.1:56550 (34 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.849-0500 I INDEX [conn84] Registering index build: 023ae8f3-52ba-4a72-95db-87c7f0ebc8e8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.939-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-309-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23) with drop timestamp Timestamp(1574796686, 1720)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.796-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45326 #122 (6 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.806-0500 I INDEX [conn70] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:41.944-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58494 #42 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.806-0500 I INDEX [conn73] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.808-0500 I INDEX [conn71] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.811-0500 I INDEX [conn65] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.212-0500 I NETWORK [conn108] end connection 127.0.0.1:56552 (33 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:36.223-0500 I NETWORK [conn104] end connection 127.0.0.1:56510 (32 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.940-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-314-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23) with drop timestamp Timestamp(1574796686, 1720)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.796-0500 I NETWORK [conn121] received client metadata from 127.0.0.1:45324 conn121: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.806-0500 I INDEX [conn70] Validation complete for collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.806-0500 I INDEX [conn73] Validation complete for collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13). No corruption found.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:41.945-0500 I NETWORK [conn42] received client metadata from 127.0.0.1:58494 conn42: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.808-0500 I INDEX [conn71] Validation complete for collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.811-0500 I INDEX [conn65] Validation complete for collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.855-0500 I COMMAND [conn64] CMD: dropIndexes test2_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:37.673-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56658 #111 (33 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.940-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-306-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23) with drop timestamp Timestamp(1574796686, 1720)
[CleanupConcurrencyWorkloads:job0:agg_out:CleanupConcurrencyWorkloads] 2019-11-26T14:31:41.949-0500 Dropping all databases except for ['config', 'local', '$external', 'admin']
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.796-0500 I NETWORK [conn122] received client metadata from 127.0.0.1:45326 conn122: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CleanupConcurrencyWorkloads:job0:agg_out:CleanupConcurrencyWorkloads] 2019-11-26T14:31:41.949-0500 Dropping database test2_fsmdb0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.807-0500 I COMMAND [conn70] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.808-0500 I COMMAND [conn73] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:41.947-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58498 #43 (2 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.810-0500 I COMMAND [conn71] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.813-0500 I COMMAND [conn65] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.857-0500 I COMMAND [conn55] command admin.$cmd appName: "tid:0" command: _shardsvrShardCollection { _shardsvrShardCollection: "test2_fsmdb0.agg_out", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("dc61b4f9-aa45-4579-b7ea-98c3efc65c32"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 1654), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45198", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 1592), t: 1 } }, $db: "admin" } numYields:0 reslen:414 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 6 } }, ReplicationStateTransition: { acquireCount: { w: 8 } }, Global: { acquireCount: { r: 4, w: 4 } }, Database: { acquireCount: { r: 4, w: 4 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 20222 } }, Collection: { acquireCount: { r: 5, w: 2, W: 2 } }, Mutex: { acquireCount: { r: 8, W: 4 } } } flowControl:{ acquireCount: 3 } storage:{} protocol:op_msg 102ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:37.674-0500 I NETWORK [conn111] received client metadata from 127.0.0.1:56658 conn111: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.941-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-318-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368) with drop timestamp Timestamp(1574796686, 2032)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.796-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45328 #123 (7 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.807-0500 W STORAGE [conn70] Could not complete validation of table:collection-31--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.808-0500 W STORAGE [conn73] Could not complete validation of table:collection-31--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:41.947-0500 I NETWORK [conn43] received client metadata from 127.0.0.1:58498 conn43: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.810-0500 W STORAGE [conn71] Could not complete validation of table:collection-29--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.813-0500 W STORAGE [conn65] Could not complete validation of table:collection-29--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.866-0500 I INDEX [conn84] index build: starting on test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:37.674-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56660 #112 (34 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.942-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-320-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368) with drop timestamp Timestamp(1574796686, 2032)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.796-0500 I NETWORK [conn123] received client metadata from 127.0.0.1:45328 conn123: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.808-0500 I INDEX [conn70] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.808-0500 I INDEX [conn73] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.810-0500 I INDEX [conn71] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.813-0500 I INDEX [conn65] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.866-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:37.674-0500 I NETWORK [conn112] received client metadata from 127.0.0.1:56660 conn112: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.943-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-315-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368) with drop timestamp Timestamp(1574796686, 2032)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.797-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45330 #124 (8 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.808-0500 W STORAGE [conn70] Could not complete validation of table:index-32--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.808-0500 W STORAGE [conn73] Could not complete validation of table:index-32--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.810-0500 W STORAGE [conn71] Could not complete validation of table:index-30--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.813-0500 W STORAGE [conn65] Could not complete validation of table:index-30--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.866-0500 I STORAGE [conn84] Index build initialized: 023ae8f3-52ba-4a72-95db-87c7f0ebc8e8: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e (b3750738-7e2e-471e-a530-9cc710d06e53 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:38.060-0500 I NETWORK [conn112] end connection 127.0.0.1:56660 (33 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.945-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-325-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b) with drop timestamp Timestamp(1574796686, 2545)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.798-0500 I NETWORK [conn124] received client metadata from 127.0.0.1:45330 conn124: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.808-0500 I INDEX [conn70] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.808-0500 I INDEX [conn73] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.810-0500 I INDEX [conn71] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.813-0500 I INDEX [conn65] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.866-0500 I INDEX [conn84] Waiting for index build to complete: 023ae8f3-52ba-4a72-95db-87c7f0ebc8e8
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:38.067-0500 I NETWORK [conn111] end connection 127.0.0.1:56658 (32 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.946-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-326-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b) with drop timestamp Timestamp(1574796686, 2545)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.853-0500 I NETWORK [conn118] end connection 127.0.0.1:45318 (7 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.808-0500 W STORAGE [conn70] Could not complete validation of table:index-35--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.808-0500 W STORAGE [conn73] Could not complete validation of table:index-35--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.810-0500 W STORAGE [conn71] Could not complete validation of table:index-31--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.813-0500 W STORAGE [conn65] Could not complete validation of table:index-31--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.866-0500 I COMMAND [conn62] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.710-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56700 #113 (33 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:32.947-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-324-8224331490264904478 (ns: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b) with drop timestamp Timestamp(1574796686, 2545)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.865-0500 I NETWORK [conn122] end connection 127.0.0.1:45326 (6 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.808-0500 I INDEX [conn70] validating collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.808-0500 I INDEX [conn73] validating collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.811-0500 I INDEX [conn71] validating collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.813-0500 I INDEX [conn65] validating collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.866-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.710-0500 I NETWORK [conn113] received client metadata from 127.0.0.1:56700 conn113: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.036-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39246 #136 (38 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.872-0500 I NETWORK [conn121] end connection 127.0.0.1:45324 (5 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.808-0500 I INDEX [conn70] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.808-0500 I INDEX [conn73] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.811-0500 I INDEX [conn71] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.813-0500 I INDEX [conn65] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.867-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.710-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56702 #114 (34 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.037-0500 I NETWORK [conn136] received client metadata from 127.0.0.1:39246 conn136: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.876-0500 I NETWORK [conn124] end connection 127.0.0.1:45330 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.808-0500 I INDEX [conn70] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.808-0500 I INDEX [conn73] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.811-0500 I INDEX [conn71] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.813-0500 I INDEX [conn65] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.869-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.711-0500 I NETWORK [conn114] received client metadata from 127.0.0.1:56702 conn114: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.037-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39248 #137 (39 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.894-0500 I NETWORK [conn119] end connection 127.0.0.1:45320 (3 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.808-0500 I INDEX [conn70] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.808-0500 I INDEX [conn73] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.811-0500 I INDEX [conn71] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.813-0500 I INDEX [conn65] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.869-0500 I COMMAND [conn64] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.800-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56734 #115 (35 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.800-0500 I NETWORK [conn115] received client metadata from 127.0.0.1:56734 conn115: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.901-0500 I NETWORK [conn123] end connection 127.0.0.1:45328 (2 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.809-0500 I COMMAND [conn70] CMD: validate config.cache.chunks.test2_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.809-0500 I COMMAND [conn73] CMD: validate config.cache.chunks.test2_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.811-0500 I COMMAND [conn71] CMD: validate config.cache.chunks.test2_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.814-0500 I COMMAND [conn65] CMD: validate config.cache.chunks.test2_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.873-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 023ae8f3-52ba-4a72-95db-87c7f0ebc8e8: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e ( b3750738-7e2e-471e-a530-9cc710d06e53 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.873-0500 I INDEX [conn84] Index build completed: 023ae8f3-52ba-4a72-95db-87c7f0ebc8e8
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.806-0500 I COMMAND [conn115] CMD: validate admin.system.keys, full:true
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.919-0500 I NETWORK [conn120] end connection 127.0.0.1:45322 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.809-0500 W STORAGE [conn70] Could not complete validation of table:collection-341--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.809-0500 W STORAGE [conn73] Could not complete validation of table:collection-341--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.811-0500 W STORAGE [conn71] Could not complete validation of table:collection-155--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.814-0500 W STORAGE [conn65] Could not complete validation of table:collection-155--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.037-0500 I NETWORK [conn137] received client metadata from 127.0.0.1:39248 conn137: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.877-0500 I COMMAND [conn85] CMD: drop test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.806-0500 W STORAGE [conn115] Could not complete validation of table:collection-41-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:39.922-0500 I NETWORK [conn117] end connection 127.0.0.1:45296 (0 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.809-0500 I INDEX [conn70] validating the internal structure of index _id_ on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.809-0500 I INDEX [conn73] validating the internal structure of index _id_ on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.811-0500 I INDEX [conn71] validating the internal structure of index _id_ on collection config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.814-0500 I INDEX [conn65] validating the internal structure of index _id_ on collection config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.041-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39262 #138 (40 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:34.878-0500 I STORAGE [conn85] dropCollection: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 (68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.806-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection admin.system.keys
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:41.945-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45354 #125 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.809-0500 W STORAGE [conn70] Could not complete validation of table:index-342--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.809-0500 W STORAGE [conn73] Could not complete validation of table:index-342--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.811-0500 W STORAGE [conn71] Could not complete validation of table:index-156--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.814-0500 W STORAGE [conn65] Could not complete validation of table:index-156--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.041-0500 I NETWORK [conn138] received client metadata from 127.0.0.1:39262 conn138: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.806-0500 W STORAGE [conn115] Could not complete validation of table:index-42-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.596-0500 I STORAGE [conn85] Finishing collection drop for test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 (68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95).
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:41.946-0500 I NETWORK [conn125] received client metadata from 127.0.0.1:45354 conn125: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.809-0500 I INDEX [conn70] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.809-0500 I INDEX [conn73] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.809-0500 W STORAGE [conn73] Could not complete validation of table:index-343--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.814-0500 I INDEX [conn65] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.041-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39268 #139 (41 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.806-0500 I INDEX [conn115] validating collection admin.system.keys (UUID: 807238e6-a72f-4ef0-b305-4bab60afd0e6)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.596-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 (68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95)'. Ident: 'index-146--2588534479858262356', commit timestamp: 'Timestamp(1574796695, 5)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.809-0500 W STORAGE [conn70] Could not complete validation of table:index-343--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.809-0500 I INDEX [conn70] validating collection config.cache.chunks.test2_fsmdb0.fsmcoll0 (UUID: c904d8e5-593f-4133-b81d-a4e28a1049f0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.809-0500 I INDEX [conn73] validating collection config.cache.chunks.test2_fsmdb0.fsmcoll0 (UUID: c904d8e5-593f-4133-b81d-a4e28a1049f0)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.814-0500 W STORAGE [conn65] Could not complete validation of table:index-157--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.042-0500 I NETWORK [conn139] received client metadata from 127.0.0.1:39268 conn139: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.806-0500 I INDEX [conn115] validating index consistency _id_ on collection admin.system.keys
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.596-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 (68ff9d0d-dbd8-4e48-bf1e-7c8631ea1f95)'. Ident: 'index-148--2588534479858262356', commit timestamp: 'Timestamp(1574796695, 5)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.812-0500 I INDEX [conn71] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.809-0500 I INDEX [conn70] validating index consistency _id_ on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.809-0500 I INDEX [conn73] validating index consistency _id_ on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.814-0500 I INDEX [conn65] validating collection config.cache.chunks.test2_fsmdb0.agg_out (UUID: 13bc0717-3ecb-47d5-aedd-db010ec932d6)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.069-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39282 #140 (42 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.806-0500 I INDEX [conn115] Validation complete for collection admin.system.keys (UUID: 807238e6-a72f-4ef0-b305-4bab60afd0e6). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.596-0500 I STORAGE [conn85] Deferring table drop for collection 'test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946'. Ident: collection-144--2588534479858262356, commit timestamp: Timestamp(1574796695, 5)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.812-0500 W STORAGE [conn71] Could not complete validation of table:index-157--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.809-0500 I INDEX [conn70] validating index consistency lastmod_1 on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.809-0500 I INDEX [conn73] validating index consistency lastmod_1 on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.814-0500 I INDEX [conn65] validating index consistency _id_ on collection config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.069-0500 I NETWORK [conn140] received client metadata from 127.0.0.1:39282 conn140: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.807-0500 I COMMAND [conn115] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.597-0500 I COMMAND [conn85] command test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946 command: drop { drop: "tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946", databaseVersion: { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 }, allowImplicitCollectionCreation: false, $clusterTime: { clusterTime: Timestamp(1574796694, 3611), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 3172), t: 1 } }, $db: "test2_fsmdb0" } numYields:0 reslen:420 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 719ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.812-0500 I INDEX [conn71] validating collection config.cache.chunks.test2_fsmdb0.agg_out (UUID: 13bc0717-3ecb-47d5-aedd-db010ec932d6)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.809-0500 I INDEX [conn70] Validation complete for collection config.cache.chunks.test2_fsmdb0.fsmcoll0 (UUID: c904d8e5-593f-4133-b81d-a4e28a1049f0). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.809-0500 I INDEX [conn73] Validation complete for collection config.cache.chunks.test2_fsmdb0.fsmcoll0 (UUID: c904d8e5-593f-4133-b81d-a4e28a1049f0). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.814-0500 I INDEX [conn65] validating index consistency lastmod_1 on collection config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.082-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39294 #141 (43 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.807-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.597-0500 I COMMAND [conn64] command test2_fsmdb0.agg_out appName: "tid:0" command: collMod { collMod: "agg_out", validationAction: "warn", writeConcern: { w: 1, wtimeout: 0 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("dc61b4f9-aa45-4579-b7ea-98c3efc65c32"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 3611), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45198", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 3172), t: 1 } }, $db: "test2_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 717876 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 718ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.812-0500 I INDEX [conn71] validating index consistency _id_ on collection config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.810-0500 I COMMAND [conn70] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.810-0500 I COMMAND [conn73] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.814-0500 I INDEX [conn65] Validation complete for collection config.cache.chunks.test2_fsmdb0.agg_out (UUID: 13bc0717-3ecb-47d5-aedd-db010ec932d6). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.082-0500 I NETWORK [conn141] received client metadata from 127.0.0.1:39294 conn141: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.809-0500 I INDEX [conn115] validating collection admin.system.version (UUID: 1b1834a4-71ee-49e7-abbc-7ae09d5089b2)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.597-0500 I COMMAND [conn65] command test2_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("edd46bd6-7845-419a-b11b-5e92d9ad8dd5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2462931333285218809, ns: "test2_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6502285868213573246, ns: "test2_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test2_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796694756), clusterTime: Timestamp(1574796694, 1718) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("edd46bd6-7845-419a-b11b-5e92d9ad8dd5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 1718), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45210", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 1592), t: 1 } }, $db: "test2_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946\", to: \"test2_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, 
name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test2_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:883 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 840ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.812-0500 I INDEX [conn71] validating index consistency lastmod_1 on collection config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.810-0500 W STORAGE [conn70] Could not complete validation of table:collection-29--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.810-0500 W STORAGE [conn73] Could not complete validation of table:collection-29--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.815-0500 I COMMAND [conn65] CMD: validate config.cache.chunks.test2_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.085-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39296 #142 (44 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.810-0500 I INDEX [conn115] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.599-0500 I COMMAND [conn62] command test2_fsmdb0.agg_out appName: "tid:4" command: collMod { collMod: "agg_out", validationAction: "error", writeConcern: { w: 1, wtimeout: 0 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3be876c6-d64e-4ca7-b4fd-5b1e9c261d78"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 3611), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45202", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 3172), t: 1 } }, $db: "test2_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 712003 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 712ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.812-0500 I INDEX [conn71] Validation complete for collection config.cache.chunks.test2_fsmdb0.agg_out (UUID: 13bc0717-3ecb-47d5-aedd-db010ec932d6). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.810-0500 I INDEX [conn70] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.810-0500 I INDEX [conn73] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.815-0500 W STORAGE [conn65] Could not complete validation of table:collection-125--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.085-0500 I NETWORK [conn142] received client metadata from 127.0.0.1:39296 conn142: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.810-0500 I INDEX [conn115] Validation complete for collection admin.system.version (UUID: 1b1834a4-71ee-49e7-abbc-7ae09d5089b2). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.599-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.812-0500 I COMMAND [conn71] CMD: validate config.cache.chunks.test2_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.810-0500 W STORAGE [conn70] Could not complete validation of table:index-30--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.810-0500 W STORAGE [conn73] Could not complete validation of table:index-30--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.815-0500 I INDEX [conn65] validating the internal structure of index _id_ on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.087-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39302 #143 (45 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.812-0500 I COMMAND [conn115] CMD: validate config.actionlog, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.599-0500 I COMMAND [conn55] command test2_fsmdb0.$cmd command: listCollections { listCollections: 1, filter: { name: "agg_out" }, $clusterTime: { clusterTime: Timestamp(1574796695, 4), signature: { hash: BinData(0, 537AF1CA0E5BF829360A450B5C5B4FE27BD08071), keyId: 6763700092420489256 } }, $configServerState: { opTime: { ts: Timestamp(1574796695, 4), t: 1 } }, $db: "test2_fsmdb0" } numYields:0 reslen:638 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 263819 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 263ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.813-0500 W STORAGE [conn71] Could not complete validation of table:collection-125--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.810-0500 I INDEX [conn70] validating collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.810-0500 I INDEX [conn73] validating collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.815-0500 W STORAGE [conn65] Could not complete validation of table:index-126--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.087-0500 I NETWORK [conn143] received client metadata from 127.0.0.1:39302 conn143: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.812-0500 W STORAGE [conn115] Could not complete validation of table:collection-47-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.604-0500 I COMMAND [conn62] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.813-0500 I INDEX [conn71] validating the internal structure of index _id_ on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.810-0500 I INDEX [conn70] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.810-0500 I INDEX [conn73] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.815-0500 I INDEX [conn65] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.127-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39326 #144 (46 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.812-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection config.actionlog
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.606-0500 I COMMAND [conn62] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.813-0500 W STORAGE [conn71] Could not complete validation of table:index-126--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.810-0500 I INDEX [conn70] Validation complete for collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.810-0500 I INDEX [conn73] Validation complete for collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.815-0500 W STORAGE [conn65] Could not complete validation of table:index-127--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.127-0500 I NETWORK [conn144] received client metadata from 127.0.0.1:39326 conn144: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.812-0500 W STORAGE [conn115] Could not complete validation of table:index-48-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.608-0500 I COMMAND [conn84] command test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e appName: "tid:1" command: insert { insert: "tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e", bypassDocumentValidation: false, ordered: false, documents: 500, shardVersion: [ Timestamp(0, 0), ObjectId('000000000000000000000000') ], databaseVersion: { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 }, writeConcern: { w: 1, wtimeout: 0 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("81d4e471-c714-4e93-a360-4ad87027dca4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 3611), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58342", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 3172), t: 1 } }, $db: "test2_fsmdb0" } ninserted:500 keysInserted:1000 numYields:0 reslen:400 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 8 } }, ReplicationStateTransition: { acquireCount: { w: 8 } }, Global: { acquireCount: { w: 8 } }, Database: { acquireCount: { w: 8 }, acquireWaitCount: { w: 3 }, timeAcquiringMicros: { w: 716205 } }, Collection: { acquireCount: { w: 8 } }, Mutex: { acquireCount: { r: 1016 } } } flowControl:{ acquireCount: 8 } storage:{} protocol:op_msg 726ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.813-0500 I INDEX [conn71] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.811-0500 I COMMAND [conn70] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.811-0500 I COMMAND [conn73] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.815-0500 I INDEX [conn65] validating collection config.cache.chunks.test2_fsmdb0.fsmcoll0 (UUID: e923876b-cb14-4999-bce6-e0591b1153b2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.130-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39328 #145 (47 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.812-0500 I INDEX [conn115] validating collection config.actionlog (UUID: ff427093-1de4-4a9f-83c9-6b01392e1aea)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.609-0500 I COMMAND [conn84] CMD: drop test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.813-0500 W STORAGE [conn71] Could not complete validation of table:index-127--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.811-0500 W STORAGE [conn70] Could not complete validation of table:collection-27--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.811-0500 W STORAGE [conn73] Could not complete validation of table:collection-27--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.815-0500 I INDEX [conn65] validating index consistency _id_ on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.130-0500 I NETWORK [conn145] received client metadata from 127.0.0.1:39328 conn145: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.812-0500 I INDEX [conn115] validating index consistency _id_ on collection config.actionlog
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.609-0500 I STORAGE [conn84] dropCollection: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e (b3750738-7e2e-471e-a530-9cc710d06e53) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.813-0500 I INDEX [conn71] validating collection config.cache.chunks.test2_fsmdb0.fsmcoll0 (UUID: e923876b-cb14-4999-bce6-e0591b1153b2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.811-0500 I INDEX [conn70] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.811-0500 I INDEX [conn73] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.815-0500 I INDEX [conn65] validating index consistency lastmod_1 on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.140-0500 W CONTROL [conn145] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.812-0500 I INDEX [conn115] Validation complete for collection config.actionlog (UUID: ff427093-1de4-4a9f-83c9-6b01392e1aea). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.609-0500 I STORAGE [conn84] Finishing collection drop for test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e (b3750738-7e2e-471e-a530-9cc710d06e53).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.813-0500 I INDEX [conn71] validating index consistency _id_ on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.813-0500 I INDEX [conn71] validating index consistency lastmod_1 on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.811-0500 W STORAGE [conn73] Could not complete validation of table:index-28--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.816-0500 I INDEX [conn65] Validation complete for collection config.cache.chunks.test2_fsmdb0.fsmcoll0 (UUID: e923876b-cb14-4999-bce6-e0591b1153b2). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.145-0500 W CONTROL [conn145] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.812-0500 I COMMAND [conn115] CMD: validate config.changelog, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.609-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e (b3750738-7e2e-471e-a530-9cc710d06e53)'. Ident: 'index-153--2588534479858262356', commit timestamp: 'Timestamp(1574796695, 509)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.811-0500 W STORAGE [conn70] Could not complete validation of table:index-28--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.813-0500 I INDEX [conn71] Validation complete for collection config.cache.chunks.test2_fsmdb0.fsmcoll0 (UUID: e923876b-cb14-4999-bce6-e0591b1153b2). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.811-0500 I INDEX [conn73] validating collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.816-0500 I COMMAND [conn65] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.147-0500 I NETWORK [conn144] end connection 127.0.0.1:39326 (46 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.812-0500 W STORAGE [conn115] Could not complete validation of table:collection-49-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.609-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e (b3750738-7e2e-471e-a530-9cc710d06e53)'. Ident: 'index-154--2588534479858262356', commit timestamp: 'Timestamp(1574796695, 509)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.811-0500 I INDEX [conn70] validating collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.813-0500 I COMMAND [conn71] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.811-0500 I INDEX [conn73] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.816-0500 W STORAGE [conn65] Could not complete validation of table:collection-27--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.147-0500 I NETWORK [conn145] end connection 127.0.0.1:39328 (45 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.812-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection config.changelog
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.609-0500 I STORAGE [conn84] Deferring table drop for collection 'test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e'. Ident: collection-152--2588534479858262356, commit timestamp: Timestamp(1574796695, 509)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.811-0500 I INDEX [conn70] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.813-0500 W STORAGE [conn71] Could not complete validation of table:collection-27--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.811-0500 I INDEX [conn73] Validation complete for collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.816-0500 I INDEX [conn65] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.150-0500 I NETWORK [conn137] end connection 127.0.0.1:39248 (44 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.813-0500 W STORAGE [conn115] Could not complete validation of table:index-50-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.610-0500 I COMMAND [conn80] command test2_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("81d4e471-c714-4e93-a360-4ad87027dca4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7459150062899698078, ns: "test2_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 993006272425637731, ns: "test2_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test2_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796694825), clusterTime: Timestamp(1574796694, 3098) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("81d4e471-c714-4e93-a360-4ad87027dca4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796694, 3098), signature: { hash: BinData(0, 45C1DDF53A6E2BCB4EC1A70E851FF4768B381088), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58342", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796694, 2351), t: 1 } }, $db: "test2_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e\", to: \"test2_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test2_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:884 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 783ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.811-0500 I INDEX [conn70] Validation complete for collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.813-0500 I INDEX [conn71] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.811-0500 I COMMAND [conn73] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.816-0500 W STORAGE [conn65] Could not complete validation of table:index-28--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.157-0500 I NETWORK [conn136] end connection 127.0.0.1:39246 (43 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.813-0500 I INDEX [conn115] validating collection config.changelog (UUID: 65b892c8-48e9-4ca9-8300-743a486a361f)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.610-0500 I COMMAND [conn80] CMD: dropIndexes test2_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.812-0500 I COMMAND [conn70] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.813-0500 W STORAGE [conn71] Could not complete validation of table:index-28--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.811-0500 W STORAGE [conn73] Could not complete validation of table:collection-25--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.816-0500 I INDEX [conn65] validating collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.178-0500 I STORAGE [conn48] createCollection: test2_fsmdb0.fsmcoll0 with provided UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0 and options: { uuid: UUID("11da2d1e-3dd5-4812-9686-c490a6bdfff0") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.813-0500 I INDEX [conn115] validating index consistency _id_ on collection config.changelog
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.613-0500 I COMMAND [conn62] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.812-0500 W STORAGE [conn70] Could not complete validation of table:collection-25--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.814-0500 I INDEX [conn71] validating collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.811-0500 I INDEX [conn73] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.817-0500 I INDEX [conn65] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.189-0500 I INDEX [conn48] index build: done building index _id_ on ns test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.813-0500 I INDEX [conn115] Validation complete for collection config.changelog (UUID: 65b892c8-48e9-4ca9-8300-743a486a361f). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.616-0500 I COMMAND [conn81] CMD: dropIndexes test2_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.812-0500 I INDEX [conn70] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.814-0500 I INDEX [conn71] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.811-0500 W STORAGE [conn73] Could not complete validation of table:index-26--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.817-0500 I INDEX [conn65] Validation complete for collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.194-0500 I INDEX [conn48] index build: done building index _id_hashed on ns test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.813-0500 I COMMAND [conn115] CMD: validate config.chunks, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.619-0500 I COMMAND [conn62] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.812-0500 W STORAGE [conn70] Could not complete validation of table:index-26--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.814-0500 I INDEX [conn71] Validation complete for collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.811-0500 I INDEX [conn73] validating the internal structure of index lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.817-0500 I COMMAND [conn65] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.195-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 from version {} to version { uuid: UUID("0fc54c53-4a71-4a77-bdc3-580b3b26d735"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.813-0500 W STORAGE [conn115] Could not complete validation of table:collection-17-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.621-0500 I COMMAND [conn81] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.812-0500 I INDEX [conn70] validating the internal structure of index lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.814-0500 I COMMAND [conn71] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.811-0500 W STORAGE [conn73] Could not complete validation of table:index-33--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.817-0500 W STORAGE [conn65] Could not complete validation of table:collection-25--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.195-0500 I SHARDING [conn48] Marking collection test2_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.813-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection config.chunks
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.622-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.812-0500 W STORAGE [conn70] Could not complete validation of table:index-33--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.814-0500 W STORAGE [conn71] Could not complete validation of table:collection-25--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.812-0500 I INDEX [conn73] validating collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.817-0500 I INDEX [conn65] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.229-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.fsmcoll0 to version 1|3||5ddd7d96cf8184c2e1493933 took 1 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.813-0500 W STORAGE [conn115] Could not complete validation of table:index-18-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.626-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.812-0500 I INDEX [conn70] validating collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.814-0500 I INDEX [conn71] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.812-0500 I INDEX [conn73] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.817-0500 W STORAGE [conn65] Could not complete validation of table:index-26--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.229-0500 I SHARDING [conn63] Updating metadata for collection test2_fsmdb0.fsmcoll0 from collection version: to collection version: 1|3||5ddd7d96cf8184c2e1493933, shard version: 1|1||5ddd7d96cf8184c2e1493933 due to version change
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.813-0500 I INDEX [conn115] validating the internal structure of index ns_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.628-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.812-0500 I INDEX [conn70] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.814-0500 W STORAGE [conn71] Could not complete validation of table:index-26--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.812-0500 I INDEX [conn73] validating index consistency lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.817-0500 I INDEX [conn65] validating collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.229-0500 I STORAGE [ShardServerCatalogCacheLoader-0] createCollection: config.cache.chunks.test2_fsmdb0.fsmcoll0 with provided UUID: c904d8e5-593f-4133-b81d-a4e28a1049f0 and options: { uuid: UUID("c904d8e5-593f-4133-b81d-a4e28a1049f0") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.813-0500 W STORAGE [conn115] Could not complete validation of table:index-19-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.630-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.812-0500 I INDEX [conn70] validating index consistency lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.814-0500 I INDEX [conn71] validating collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.812-0500 I INDEX [conn73] Validation complete for collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.818-0500 I INDEX [conn65] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.247-0500 I INDEX [ShardServerCatalogCacheLoader-0] index build: done building index _id_ on ns config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.814-0500 I INDEX [conn115] validating the internal structure of index ns_1_shard_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.633-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.812-0500 I INDEX [conn70] Validation complete for collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.814-0500 I INDEX [conn71] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.812-0500 I COMMAND [conn73] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.818-0500 I INDEX [conn65] Validation complete for collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.248-0500 I INDEX [ShardServerCatalogCacheLoader-0] Registering index build: 8c5551ca-0881-45fa-9049-be7ab26da254
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.814-0500 W STORAGE [conn115] Could not complete validation of table:index-20-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.634-0500 I COMMAND [conn80] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.813-0500 I COMMAND [conn70] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.814-0500 I INDEX [conn71] Validation complete for collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.812-0500 W STORAGE [conn73] Could not complete validation of table:collection-21--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.818-0500 I COMMAND [conn65] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.264-0500 I INDEX [ShardServerCatalogCacheLoader-0] index build: starting on config.cache.chunks.test2_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.814-0500 I INDEX [conn115] validating the internal structure of index ns_1_lastmod_1 on collection config.chunks
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.638-0500 I COMMAND [conn62] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.813-0500 W STORAGE [conn70] Could not complete validation of table:collection-21--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.815-0500 I COMMAND [conn71] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.812-0500 I INDEX [conn73] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.818-0500 W STORAGE [conn65] Could not complete validation of table:collection-21--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.264-0500 I INDEX [ShardServerCatalogCacheLoader-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.814-0500 W STORAGE [conn115] Could not complete validation of table:index-21-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.639-0500 I COMMAND [conn80] CMD: dropIndexes test2_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.813-0500 I INDEX [conn70] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.815-0500 W STORAGE [conn71] Could not complete validation of table:collection-21--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.812-0500 W STORAGE [conn73] Could not complete validation of table:index-22--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.818-0500 I INDEX [conn65] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.264-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Index build initialized: 8c5551ca-0881-45fa-9049-be7ab26da254: config.cache.chunks.test2_fsmdb0.fsmcoll0 (c904d8e5-593f-4133-b81d-a4e28a1049f0 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.814-0500 I INDEX [conn115] validating collection config.chunks (UUID: e7035d0b-a892-4426-b520-83da62bcbda6)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.640-0500 I COMMAND [conn80] CMD: dropIndexes test2_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.813-0500 W STORAGE [conn70] Could not complete validation of table:index-22--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.815-0500 I INDEX [conn71] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.812-0500 I INDEX [conn73] validating collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.818-0500 W STORAGE [conn65] Could not complete validation of table:index-22--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.264-0500 I INDEX [ShardServerCatalogCacheLoader-0] Waiting for index build to complete: 8c5551ca-0881-45fa-9049-be7ab26da254
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.814-0500 I INDEX [conn115] validating index consistency _id_ on collection config.chunks
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.644-0500 I COMMAND [conn80] CMD: dropIndexes test2_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.813-0500 I INDEX [conn70] validating collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.815-0500 W STORAGE [conn71] Could not complete validation of table:index-22--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.812-0500 I INDEX [conn73] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.818-0500 I INDEX [conn65] validating collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.264-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.814-0500 I INDEX [conn115] validating index consistency ns_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.644-0500 I COMMAND [conn62] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.813-0500 I INDEX [conn70] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.815-0500 I INDEX [conn71] validating collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.812-0500 I INDEX [conn73] Validation complete for collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.818-0500 I INDEX [conn65] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.265-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.814-0500 I INDEX [conn115] validating index consistency ns_1_shard_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.646-0500 I COMMAND [conn80] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.813-0500 I INDEX [conn70] Validation complete for collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.815-0500 I INDEX [conn71] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.815-0500 I COMMAND [conn73] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.818-0500 I INDEX [conn65] Validation complete for collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.267-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.814-0500 I INDEX [conn115] validating index consistency ns_1_lastmod_1 on collection config.chunks
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.652-0500 I COMMAND [conn62] CMD: dropIndexes test2_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.815-0500 I COMMAND [conn70] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.815-0500 I INDEX [conn71] Validation complete for collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.828-0500 W STORAGE [conn73] Could not complete validation of table:collection-16--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.820-0500 I COMMAND [conn65] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.268-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 8c5551ca-0881-45fa-9049-be7ab26da254: config.cache.chunks.test2_fsmdb0.fsmcoll0 ( c904d8e5-593f-4133-b81d-a4e28a1049f0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.814-0500 I INDEX [conn115] Validation complete for collection config.chunks (UUID: e7035d0b-a892-4426-b520-83da62bcbda6). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.654-0500 I COMMAND [conn80] CMD: dropIndexes test2_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.815-0500 W STORAGE [conn70] Could not complete validation of table:collection-16--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.817-0500 I COMMAND [conn71] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.828-0500 I INDEX [conn73] validating collection local.oplog.rs (UUID: 88962763-38f7-4965-bfd6-b2a62304ae0e)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.824-0500 W STORAGE [conn65] Could not complete validation of table:collection-16--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.268-0500 I INDEX [ShardServerCatalogCacheLoader-0] Index build completed: 8c5551ca-0881-45fa-9049-be7ab26da254
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.814-0500 I COMMAND [conn115] CMD: validate config.collections, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.655-0500 I COMMAND [conn62] CMD: dropIndexes test2_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.815-0500 I INDEX [conn70] validating collection local.oplog.rs (UUID: 6d43bede-f05f-41b1-b7ac-5a32b66b8140)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.821-0500 W STORAGE [conn71] Could not complete validation of table:collection-16--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.892-0500 I INDEX [conn73] Validation complete for collection local.oplog.rs (UUID: 88962763-38f7-4965-bfd6-b2a62304ae0e). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.824-0500 I INDEX [conn65] validating collection local.oplog.rs (UUID: 307925b3-4143-4c06-a46a-f04119b3afb4)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.268-0500 I SHARDING [ShardServerCatalogCacheLoader-0] Marking collection config.cache.chunks.test2_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.815-0500 W STORAGE [conn115] Could not complete validation of table:collection-51-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.657-0500 I COMMAND [conn80] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.863-0500 I INDEX [conn70] Validation complete for collection local.oplog.rs (UUID: 6d43bede-f05f-41b1-b7ac-5a32b66b8140). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.821-0500 I INDEX [conn71] validating collection local.oplog.rs (UUID: 6c707c3f-4064-4e35-98fb-b2fff8245539)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.893-0500 I COMMAND [conn73] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.843-0500 I INDEX [conn65] Validation complete for collection local.oplog.rs (UUID: 307925b3-4143-4c06-a46a-f04119b3afb4). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.826-0500 I COMMAND [conn68] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.815-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection config.collections
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.661-0500 I COMMAND [conn62] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.863-0500 I COMMAND [conn70] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.840-0500 I INDEX [conn71] Validation complete for collection local.oplog.rs (UUID: 6c707c3f-4064-4e35-98fb-b2fff8245539). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.893-0500 I INDEX [conn73] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.844-0500 I COMMAND [conn65] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.855-0500 I COMMAND [conn68] CMD: dropIndexes test2_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.815-0500 W STORAGE [conn115] Could not complete validation of table:index-52-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.663-0500 I COMMAND [conn62] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.865-0500 I INDEX [conn70] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.840-0500 I COMMAND [conn71] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.895-0500 I INDEX [conn73] validating collection local.replset.election (UUID: d0928956-d7fc-46fe-a9bc-1f07f2435457)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.844-0500 I INDEX [conn65] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.865-0500 I COMMAND [conn68] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.815-0500 I INDEX [conn115] validating collection config.collections (UUID: c846d630-16e0-4675-b90f-3cd769544ef0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.815-0500 I INDEX [conn115] validating index consistency _id_ on collection config.collections
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.867-0500 I INDEX [conn70] validating collection local.replset.election (UUID: bf7b5380-e70a-475e-ad1b-16751bee6907)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.841-0500 I INDEX [conn71] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.896-0500 I INDEX [conn73] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.847-0500 I INDEX [conn65] validating collection local.replset.election (UUID: 7b059263-7419-4cf5-8072-b44957d729c9)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:34.869-0500 I COMMAND [conn68] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.667-0500 I COMMAND [conn62] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.815-0500 I INDEX [conn115] Validation complete for collection config.collections (UUID: c846d630-16e0-4675-b90f-3cd769544ef0). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.868-0500 I INDEX [conn70] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.868-0500 I INDEX [conn70] Validation complete for collection local.replset.election (UUID: bf7b5380-e70a-475e-ad1b-16751bee6907). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.868-0500 I COMMAND [conn70] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.847-0500 I INDEX [conn65] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.599-0500 I COMMAND [conn68] CMD: dropIndexes test2_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.667-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.815-0500 I COMMAND [conn115] CMD: validate config.databases, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.815-0500 W STORAGE [conn115] Could not complete validation of table:collection-55-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.815-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection config.databases
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.868-0500 W STORAGE [conn70] Could not complete validation of table:collection-4--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.847-0500 I INDEX [conn65] Validation complete for collection local.replset.election (UUID: 7b059263-7419-4cf5-8072-b44957d729c9). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.604-0500 I COMMAND [conn68] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.674-0500 I COMMAND [conn80] CMD: dropIndexes test2_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.843-0500 I INDEX [conn71] validating collection local.replset.election (UUID: 6a83721b-d0f2-438c-a2e3-ec6a11e75236)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.896-0500 I INDEX [conn73] Validation complete for collection local.replset.election (UUID: d0928956-d7fc-46fe-a9bc-1f07f2435457). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.815-0500 W STORAGE [conn115] Could not complete validation of table:index-56-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.868-0500 I INDEX [conn70] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.847-0500 I COMMAND [conn65] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.606-0500 I COMMAND [conn68] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.610-0500 I COMMAND [conn70] CMD: dropIndexes test2_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.613-0500 I COMMAND [conn68] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.896-0500 I COMMAND [conn73] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.815-0500 I INDEX [conn115] validating collection config.databases (UUID: 1c31f9a7-ee46-41d3-a296-2e1f323b51b8)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.870-0500 I INDEX [conn70] validating collection local.replset.minvalid (UUID: 6654b1c2-f323-4c78-9165-5ff31d331960)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.848-0500 W STORAGE [conn65] Could not complete validation of table:collection-4--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.848-0500 I INDEX [conn65] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.701-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.616-0500 I COMMAND [conn71] CMD: dropIndexes test2_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.896-0500 W STORAGE [conn73] Could not complete validation of table:collection-4--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.815-0500 I INDEX [conn115] validating index consistency _id_ on collection config.databases
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.870-0500 I INDEX [conn70] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.843-0500 I INDEX [conn71] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.849-0500 I INDEX [conn65] validating collection local.replset.minvalid (UUID: e1166351-a2a9-4335-b202-a653b252b811)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.849-0500 I INDEX [conn65] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.619-0500 I COMMAND [conn68] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.896-0500 I INDEX [conn73] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.815-0500 I INDEX [conn115] Validation complete for collection config.databases (UUID: 1c31f9a7-ee46-41d3-a296-2e1f323b51b8). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.870-0500 I INDEX [conn70] Validation complete for collection local.replset.minvalid (UUID: 6654b1c2-f323-4c78-9165-5ff31d331960). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.843-0500 I INDEX [conn71] Validation complete for collection local.replset.election (UUID: 6a83721b-d0f2-438c-a2e3-ec6a11e75236). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.705-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.849-0500 I INDEX [conn65] Validation complete for collection local.replset.minvalid (UUID: e1166351-a2a9-4335-b202-a653b252b811). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.621-0500 I COMMAND [conn71] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.898-0500 I INDEX [conn73] validating collection local.replset.minvalid (UUID: 6eb6e647-60c7-450a-a905-f04052287b8a)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.816-0500 I COMMAND [conn115] CMD: validate config.lockpings, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.871-0500 I COMMAND [conn70] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.844-0500 I COMMAND [conn71] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.710-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.850-0500 I COMMAND [conn65] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.622-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.898-0500 I INDEX [conn73] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.816-0500 W STORAGE [conn115] Could not complete validation of table:collection-32-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.876-0500 I INDEX [conn70] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.844-0500 W STORAGE [conn71] Could not complete validation of table:collection-4--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.711-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.853-0500 I INDEX [conn65] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.626-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.898-0500 I INDEX [conn73] Validation complete for collection local.replset.minvalid (UUID: 6eb6e647-60c7-450a-a905-f04052287b8a). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.816-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection config.lockpings
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.878-0500 I INDEX [conn70] validating collection local.replset.oplogTruncateAfterPoint (UUID: fe211210-ae1b-4ab2-81d6-86b025cc1404)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.844-0500 I INDEX [conn71] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.754-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46838 #149 (44 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.855-0500 I INDEX [conn65] validating collection local.replset.oplogTruncateAfterPoint (UUID: 022b88bb-9282-4f39-aad1-6988341f4ac1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.628-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.898-0500 I COMMAND [conn73] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.816-0500 W STORAGE [conn115] Could not complete validation of table:index-33-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.878-0500 I INDEX [conn70] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.845-0500 I INDEX [conn71] validating collection local.replset.minvalid (UUID: 3f481e27-9697-4b6d-b77b-0bd9b43c5dfa)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.754-0500 I NETWORK [conn149] received client metadata from 127.0.0.1:46838 conn149: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.855-0500 I INDEX [conn65] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.630-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.902-0500 I INDEX [conn73] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.816-0500 I INDEX [conn115] validating the internal structure of index ping_1 on collection config.lockpings
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.878-0500 I INDEX [conn70] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: fe211210-ae1b-4ab2-81d6-86b025cc1404). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.845-0500 I INDEX [conn71] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.755-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46840 #150 (45 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.855-0500 I INDEX [conn65] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 022b88bb-9282-4f39-aad1-6988341f4ac1). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.633-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.904-0500 I INDEX [conn73] validating collection local.replset.oplogTruncateAfterPoint (UUID: 5d41bfc8-ebca-43f3-a038-30023495a91a)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.816-0500 W STORAGE [conn115] Could not complete validation of table:index-34-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.879-0500 I COMMAND [conn70] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.845-0500 I INDEX [conn71] Validation complete for collection local.replset.minvalid (UUID: 3f481e27-9697-4b6d-b77b-0bd9b43c5dfa). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.755-0500 I NETWORK [conn150] received client metadata from 127.0.0.1:46840 conn150: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.856-0500 I COMMAND [conn65] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.634-0500 I COMMAND [conn70] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.904-0500 I INDEX [conn73] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[executor:fsm_workload_test:job0] 2019-11-26T14:31:42.002-0500 agg_out:CleanupConcurrencyWorkloads ran in 0.06 seconds: no failures detected.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.816-0500 I INDEX [conn115] validating collection config.lockpings (UUID: f662f115-623a-496b-9953-7132cdf8c056)
[executor] 2019-11-26T14:31:44.019-0500 Waiting for threads to complete
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.879-0500 I INDEX [conn70] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.846-0500 I COMMAND [conn71] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.822-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46854 #151 (46 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.857-0500 I INDEX [conn65] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.638-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:42.002-0500 I NETWORK [conn43] end connection 127.0.0.1:58498 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:42.002-0500 I NETWORK [conn125] end connection 127.0.0.1:45354 (0 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.904-0500 I INDEX [conn73] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 5d41bfc8-ebca-43f3-a038-30023495a91a). No corruption found.
[CheckReplDBHashInBackground:job0] Stopping the background check repl dbhash thread.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.816-0500 I INDEX [conn115] validating index consistency _id_ on collection config.lockpings
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.882-0500 I INDEX [conn70] validating collection local.startup_log (UUID: 7b6988ea-0c65-41a6-9855-5680c2c711a1)
[executor] 2019-11-26T14:31:44.020-0500 Threads are completed!
[executor] 2019-11-26T14:31:44.020-0500 Summary of latest execution: All 5 test(s) passed in 10.09 seconds.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.850-0500 I INDEX [conn71] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.823-0500 I NETWORK [conn151] received client metadata from 127.0.0.1:46854 conn151: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.859-0500 I INDEX [conn65] validating collection local.startup_log (UUID: 62f9eac5-a715-4818-9af1-edc47894f622)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.639-0500 I COMMAND [conn70] CMD: dropIndexes test2_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:42.002-0500 I NETWORK [conn42] end connection 127.0.0.1:58494 (0 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.904-0500 I COMMAND [conn73] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.816-0500 I INDEX [conn115] validating index consistency ping_1 on collection config.lockpings
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.882-0500 I INDEX [conn70] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.852-0500 I INDEX [conn71] validating collection local.replset.oplogTruncateAfterPoint (UUID: ae67a1b2-b2be-4d7e-8242-18f3082bc280)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.825-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46856 #152 (47 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.859-0500 I INDEX [conn65] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.022-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45360 #126 (1 connection now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.639-0500 I COMMAND [conn70] CMD: dropIndexes test2_fsmdb0.agg_out: { flag: 1.0 }
[CheckReplDBHashInBackground:job0] Starting the background check repl dbhash thread.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:44.022-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58500 #44 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.905-0500 I INDEX [conn73] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.816-0500 I INDEX [conn115] Validation complete for collection config.lockpings (UUID: f662f115-623a-496b-9953-7132cdf8c056). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.882-0500 I INDEX [conn70] Validation complete for collection local.startup_log (UUID: 7b6988ea-0c65-41a6-9855-5680c2c711a1). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.852-0500 I INDEX [conn71] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.825-0500 I NETWORK [conn152] received client metadata from 127.0.0.1:46856 conn152: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.859-0500 I INDEX [conn65] Validation complete for collection local.startup_log (UUID: 62f9eac5-a715-4818-9af1-edc47894f622). No corruption found.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.022-0500 I NETWORK [conn126] received client metadata from 127.0.0.1:45360 conn126: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.644-0500 I COMMAND [conn70] CMD: dropIndexes test2_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:44.022-0500 I NETWORK [conn44] received client metadata from 127.0.0.1:58500 conn44: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.907-0500 I INDEX [conn73] validating collection local.startup_log (UUID: e0cc0511-0005-4584-a461-5ae30058b4c6)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.817-0500 I COMMAND [conn115] CMD: validate config.locks, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.882-0500 I COMMAND [conn70] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.852-0500 I INDEX [conn71] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: ae67a1b2-b2be-4d7e-8242-18f3082bc280). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.835-0500 W CONTROL [conn152] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 47 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.859-0500 I COMMAND [conn65] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.860-0500 I INDEX [conn65] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.644-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:44.024-0500 I NETWORK [conn44] end connection 127.0.0.1:58500 (0 connections now open)
[CheckReplDBHashInBackground:job0] Resuming the background check repl dbhash thread.
[executor:fsm_workload_test:job0] 2019-11-26T14:31:44.027-0500 Running agg_out.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval TestData = new Object(); TestData["usingReplicaSetShards"] = true; TestData["runningWithAutoSplit"] = false; TestData["runningWithBalancer"] = false; TestData["fsmWorkloads"] = ["jstests/concurrency/fsm_workloads/agg_out.js"]; TestData["resmokeDbPathPrefix"] = "/home/nz_linux/data/job0/resmoke"; TestData["dbNamePrefix"] = "test3_"; TestData["sameDB"] = false; TestData["sameCollection"] = false; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "resmoke_runner"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); --readMode=commands mongodb://localhost:20007,localhost:20008 jstests/concurrency/fsm_libs/resmoke_runner.js
[fsm_workload_test:agg_out] 2019-11-26T14:31:44.028-0500 Starting FSM workload jstests/concurrency/fsm_workloads/agg_out.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval TestData = new Object(); TestData["usingReplicaSetShards"] = true; TestData["runningWithAutoSplit"] = false; TestData["runningWithBalancer"] = false; TestData["fsmWorkloads"] = ["jstests/concurrency/fsm_workloads/agg_out.js"]; TestData["resmokeDbPathPrefix"] = "/home/nz_linux/data/job0/resmoke"; TestData["dbNamePrefix"] = "test3_"; TestData["sameDB"] = false; TestData["sameCollection"] = false; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "resmoke_runner"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); --readMode=commands mongodb://localhost:20007,localhost:20008 jstests/concurrency/fsm_libs/resmoke_runner.js
[executor:fsm_workload_test:job0] 2019-11-26T14:31:44.028-0500 Running agg_out:CheckReplDBHashInBackground...
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.907-0500 I INDEX [conn73] validating index consistency _id_ on collection local.startup_log
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:44.029-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash_background.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash_background"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash_background.js
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.817-0500 W STORAGE [conn115] Could not complete validation of table:collection-28-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.883-0500 I INDEX [conn70] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.852-0500 I COMMAND [conn71] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.857-0500 W CONTROL [conn152] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 47 }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.024-0500 I NETWORK [conn126] end connection 127.0.0.1:45360 (0 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.862-0500 I INDEX [conn65] validating collection local.system.replset (UUID: c43cc3e4-845d-4144-8406-83bf4df96d39)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.646-0500 I COMMAND [conn70] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.907-0500 I INDEX [conn73] Validation complete for collection local.startup_log (UUID: e0cc0511-0005-4584-a461-5ae30058b4c6). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.817-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection config.locks
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.885-0500 I INDEX [conn70] validating collection local.system.replset (UUID: 920cbf66-0930-4ef5-82e9-10d7319f0fda)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.853-0500 I INDEX [conn71] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.860-0500 I NETWORK [conn151] end connection 127.0.0.1:46854 (46 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.862-0500 I INDEX [conn65] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.651-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.908-0500 I COMMAND [conn73] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.817-0500 W STORAGE [conn115] Could not complete validation of table:index-29-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.885-0500 I INDEX [conn70] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.855-0500 I INDEX [conn71] validating collection local.startup_log (UUID: fb2ea5d2-ac7b-4697-a368-9f5d41483423)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.860-0500 I NETWORK [conn152] end connection 127.0.0.1:46856 (45 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.862-0500 I INDEX [conn65] Validation complete for collection local.system.replset (UUID: c43cc3e4-845d-4144-8406-83bf4df96d39). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.654-0500 I COMMAND [conn70] CMD: dropIndexes test2_fsmdb0.agg_out: { padding: "text" }
[fsm_workload_test:agg_out] 2019-11-26T14:31:44.038-0500 FSM workload jstests/concurrency/fsm_workloads/agg_out.js started with pid 15849.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.908-0500 I INDEX [conn73] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.817-0500 I INDEX [conn115] validating the internal structure of index ts_1 on collection config.locks
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.885-0500 I INDEX [conn70] Validation complete for collection local.system.replset (UUID: 920cbf66-0930-4ef5-82e9-10d7319f0fda). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.855-0500 I INDEX [conn71] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.864-0500 I NETWORK [conn150] end connection 127.0.0.1:46840 (44 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.863-0500 I COMMAND [conn65] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.655-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.910-0500 I INDEX [conn73] validating collection local.system.replset (UUID: 3b8c02e8-ec29-4e79-912d-3e315d1d851c)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.817-0500 W STORAGE [conn115] Could not complete validation of table:index-30-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.886-0500 I COMMAND [conn70] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.855-0500 I INDEX [conn71] Validation complete for collection local.startup_log (UUID: fb2ea5d2-ac7b-4697-a368-9f5d41483423). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:35.870-0500 I NETWORK [conn149] end connection 127.0.0.1:46838 (43 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.864-0500 I INDEX [conn65] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.657-0500 I COMMAND [conn70] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:44.041-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js started with pid 15852.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.910-0500 I INDEX [conn73] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.817-0500 I INDEX [conn115] validating the internal structure of index state_1_process_1 on collection config.locks
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.886-0500 I INDEX [conn70] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.856-0500 I COMMAND [conn71] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:36.195-0500 I COMMAND [conn80] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.866-0500 I INDEX [conn65] validating collection local.system.rollback.id (UUID: af3b2fdb-b5ae-49b3-a026-c55e1bf822c0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.661-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.911-0500 I INDEX [conn73] Validation complete for collection local.system.replset (UUID: 3b8c02e8-ec29-4e79-912d-3e315d1d851c). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.817-0500 W STORAGE [conn115] Could not complete validation of table:index-31-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.888-0500 I INDEX [conn70] validating collection local.system.rollback.id (UUID: 9434a858-83b3-4d87-8d66-64bde405790b)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.857-0500 I INDEX [conn71] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:36.210-0500 I NETWORK [conn142] end connection 127.0.0.1:46742 (42 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.866-0500 I INDEX [conn65] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.663-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.911-0500 I COMMAND [conn73] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.817-0500 I INDEX [conn115] validating collection config.locks (UUID: dbde06c7-d8ac-4f80-ab9f-cae486f16451)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.888-0500 I INDEX [conn70] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.859-0500 I INDEX [conn71] validating collection local.system.replset (UUID: 2b695a66-e9c6-4bba-a36e-eb0a5cf356ba)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:36.210-0500 I NETWORK [conn143] end connection 127.0.0.1:46750 (41 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.866-0500 I INDEX [conn65] Validation complete for collection local.system.rollback.id (UUID: af3b2fdb-b5ae-49b3-a026-c55e1bf822c0). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.667-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.912-0500 I INDEX [conn73] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.817-0500 I INDEX [conn115] validating index consistency _id_ on collection config.locks
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.888-0500 I INDEX [conn70] Validation complete for collection local.system.rollback.id (UUID: 9434a858-83b3-4d87-8d66-64bde405790b). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.859-0500 I INDEX [conn71] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:36.211-0500 I NETWORK [conn144] end connection 127.0.0.1:46770 (40 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.868-0500 I COMMAND [conn65] CMD: validate test2_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.667-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.914-0500 I INDEX [conn73] validating collection local.system.rollback.id (UUID: 1099f6d7-f170-471c-a0ac-dc97bd7e42b0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.817-0500 I INDEX [conn115] validating index consistency ts_1 on collection config.locks
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.890-0500 I COMMAND [conn70] CMD: validate test2_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.859-0500 I INDEX [conn71] Validation complete for collection local.system.replset (UUID: 2b695a66-e9c6-4bba-a36e-eb0a5cf356ba). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:36.211-0500 I NETWORK [conn146] end connection 127.0.0.1:46778 (39 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.868-0500 W STORAGE [conn65] Could not complete validation of table:collection-137--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.674-0500 I COMMAND [conn70] CMD: dropIndexes test2_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.914-0500 I INDEX [conn73] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.817-0500 I INDEX [conn115] validating index consistency state_1_process_1 on collection config.locks
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.890-0500 W STORAGE [conn70] Could not complete validation of table:collection-337--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.859-0500 I COMMAND [conn71] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:36.211-0500 I NETWORK [conn145] end connection 127.0.0.1:46772 (38 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.868-0500 I INDEX [conn65] validating the internal structure of index _id_ on collection test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.701-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.914-0500 I INDEX [conn73] Validation complete for collection local.system.rollback.id (UUID: 1099f6d7-f170-471c-a0ac-dc97bd7e42b0). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.817-0500 I INDEX [conn115] Validation complete for collection config.locks (UUID: dbde06c7-d8ac-4f80-ab9f-cae486f16451). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.890-0500 I INDEX [conn70] validating the internal structure of index _id_ on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.860-0500 I INDEX [conn71] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:36.223-0500 I NETWORK [conn141] end connection 127.0.0.1:46740 (37 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.868-0500 W STORAGE [conn65] Could not complete validation of table:index-138--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.705-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.915-0500 I COMMAND [conn73] CMD: validate test2_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.818-0500 I COMMAND [conn115] CMD: validate config.migrations, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.890-0500 W STORAGE [conn70] Could not complete validation of table:index-338--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.862-0500 I INDEX [conn71] validating collection local.system.rollback.id (UUID: d6027364-802b-4e8d-ae7f-556bc4252840)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:37.681-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46880 #153 (38 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.868-0500 I INDEX [conn65] validating the internal structure of index _id_hashed on collection test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.709-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.915-0500 W STORAGE [conn73] Could not complete validation of table:collection-337--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.818-0500 W STORAGE [conn115] Could not complete validation of table:collection-22-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.890-0500 I INDEX [conn70] validating the internal structure of index _id_hashed on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.862-0500 I INDEX [conn71] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:37.681-0500 I NETWORK [conn153] received client metadata from 127.0.0.1:46880 conn153: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.868-0500 W STORAGE [conn65] Could not complete validation of table:index-147--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.711-0500 I COMMAND [conn65] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.915-0500 I INDEX [conn73] validating the internal structure of index _id_ on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.818-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection config.migrations
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.891-0500 W STORAGE [conn70] Could not complete validation of table:index-339--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.862-0500 I INDEX [conn71] Validation complete for collection local.system.rollback.id (UUID: d6027364-802b-4e8d-ae7f-556bc4252840). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:37.681-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46882 #154 (39 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.869-0500 I INDEX [conn65] validating collection test2_fsmdb0.agg_out (UUID: 08932b51-9933-4490-ab6b-1df6cfb57633)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.752-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39362 #146 (44 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.915-0500 W STORAGE [conn73] Could not complete validation of table:index-338--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.818-0500 W STORAGE [conn115] Could not complete validation of table:index-23-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.891-0500 I INDEX [conn70] validating collection test2_fsmdb0.fsmcoll0 (UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.864-0500 I COMMAND [conn71] CMD: validate test2_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:37.682-0500 I NETWORK [conn154] received client metadata from 127.0.0.1:46882 conn154: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.870-0500 I INDEX [conn65] validating index consistency _id_ on collection test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.752-0500 I NETWORK [conn146] received client metadata from 127.0.0.1:39362 conn146: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.916-0500 I INDEX [conn73] validating the internal structure of index _id_hashed on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.818-0500 I INDEX [conn115] validating the internal structure of index ns_1_min_1 on collection config.migrations
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.892-0500 I INDEX [conn70] validating index consistency _id_ on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.864-0500 W STORAGE [conn71] Could not complete validation of table:collection-137--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:37.741-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46890 #155 (40 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.870-0500 I INDEX [conn65] validating index consistency _id_hashed on collection test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.752-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39366 #147 (45 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.916-0500 W STORAGE [conn73] Could not complete validation of table:index-339--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.818-0500 W STORAGE [conn115] Could not complete validation of table:index-24-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.892-0500 I INDEX [conn70] validating index consistency _id_hashed on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.864-0500 I INDEX [conn71] validating the internal structure of index _id_ on collection test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:37.741-0500 I NETWORK [conn155] received client metadata from 127.0.0.1:46890 conn155: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.871-0500 I INDEX [conn65] Validation complete for collection test2_fsmdb0.agg_out (UUID: 08932b51-9933-4490-ab6b-1df6cfb57633). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.753-0500 I NETWORK [conn147] received client metadata from 127.0.0.1:39366 conn147: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.916-0500 I INDEX [conn73] validating collection test2_fsmdb0.fsmcoll0 (UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.818-0500 I INDEX [conn115] validating collection config.migrations (UUID: 550e32ef-0dd4-48f9-bb5e-9e21bec0734f)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.893-0500 I INDEX [conn70] Validation complete for collection test2_fsmdb0.fsmcoll0 (UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.864-0500 W STORAGE [conn71] Could not complete validation of table:index-138--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:37.744-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46894 #156 (41 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.871-0500 I COMMAND [conn65] CMD: validate test2_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.814-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39378 #148 (46 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.918-0500 I INDEX [conn73] validating index consistency _id_ on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.818-0500 I INDEX [conn115] validating index consistency _id_ on collection config.migrations
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.894-0500 I NETWORK [conn70] end connection 127.0.0.1:53398 (12 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.865-0500 I INDEX [conn71] validating the internal structure of index _id_hashed on collection test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:37.745-0500 I NETWORK [conn156] received client metadata from 127.0.0.1:46894 conn156: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.872-0500 W STORAGE [conn65] Could not complete validation of table:collection-121--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.814-0500 I NETWORK [conn148] received client metadata from 127.0.0.1:39378 conn148: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.918-0500 I INDEX [conn73] validating index consistency _id_hashed on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.818-0500 I INDEX [conn115] validating index consistency ns_1_min_1 on collection config.migrations
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:39.933-0500 I NETWORK [conn69] end connection 127.0.0.1:53366 (11 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.865-0500 W STORAGE [conn71] Could not complete validation of table:index-147--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:37.766-0500 I COMMAND [conn156] CMD fsync: sync:1 lock:1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.872-0500 I INDEX [conn65] validating the internal structure of index _id_ on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.817-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39382 #149 (47 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.918-0500 I INDEX [conn73] Validation complete for collection test2_fsmdb0.fsmcoll0 (UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.818-0500 I INDEX [conn115] Validation complete for collection config.migrations (UUID: 550e32ef-0dd4-48f9-bb5e-9e21bec0734f). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:41.973-0500 I COMMAND [ReplWriterWorker-2] CMD: drop test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.865-0500 I INDEX [conn71] validating collection test2_fsmdb0.agg_out (UUID: 08932b51-9933-4490-ab6b-1df6cfb57633)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:37.900-0500 W COMMAND [fsyncLockWorker] WARNING: instance is locked, blocking all writes. The fsync command has finished execution, remember to unlock the instance using fsyncUnlock().
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.872-0500 W STORAGE [conn65] Could not complete validation of table:index-122--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.817-0500 I NETWORK [conn149] received client metadata from 127.0.0.1:39382 conn149: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.920-0500 I NETWORK [conn73] end connection 127.0.0.1:52512 (11 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.819-0500 I COMMAND [conn115] CMD: validate config.mongos, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:41.973-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796701, 13), t: 1 } and commit timestamp Timestamp(1574796701, 13)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.866-0500 I INDEX [conn71] validating index consistency _id_ on collection test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:37.900-0500 I COMMAND [conn156] mongod is locked and no writes are allowed. db.fsyncUnlock() to unlock
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.872-0500 I INDEX [conn65] validating the internal structure of index _id_hashed on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.827-0500 W CONTROL [conn149] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.933-0500 I NETWORK [conn72] end connection 127.0.0.1:52480 (10 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.819-0500 W STORAGE [conn115] Could not complete validation of table:collection-43-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:41.973-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.867-0500 I INDEX [conn71] validating index consistency _id_hashed on collection test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:37.900-0500 I COMMAND [conn156] Lock count is 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.872-0500 W STORAGE [conn65] Could not complete validation of table:index-123--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.843-0500 W CONTROL [conn149] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.819-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection config.mongos
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:41.973-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0)'. Ident: 'index-338--8000595249233899911', commit timestamp: 'Timestamp(1574796701, 13)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.867-0500 I INDEX [conn71] Validation complete for collection test2_fsmdb0.agg_out (UUID: 08932b51-9933-4490-ab6b-1df6cfb57633). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:37.900-0500 I COMMAND [conn156] For more info see http://dochub.mongodb.org/core/fsynccommand
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.873-0500 I INDEX [conn65] validating collection test2_fsmdb0.fsmcoll0 (UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.845-0500 I NETWORK [conn148] end connection 127.0.0.1:39378 (46 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.819-0500 W STORAGE [conn115] Could not complete validation of table:index-44-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:41.973-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0)'. Ident: 'index-339--8000595249233899911', commit timestamp: 'Timestamp(1574796701, 13)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.868-0500 I COMMAND [conn71] CMD: validate test2_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:37.900-0500 I COMMAND [conn156] command admin.$cmd appName: "MongoDB Shell" command: fsync { fsync: 1.0, lock: 1.0, allowFsyncFailure: true, lsid: { id: UUID("34687f44-29da-4637-88ae-5fc26b14b72d") }, $clusterTime: { clusterTime: Timestamp(1574796697, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:477 locks:{ Mutex: { acquireCount: { W: 1 } } } protocol:op_msg 134ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.874-0500 I INDEX [conn65] validating index consistency _id_ on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.846-0500 I NETWORK [conn149] end connection 127.0.0.1:39382 (45 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.819-0500 I INDEX [conn115] validating collection config.mongos (UUID: 57207abe-6d8d-4102-a526-bc847dba6c09)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:41.973-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test2_fsmdb0.fsmcoll0'. Ident: collection-337--8000595249233899911, commit timestamp: Timestamp(1574796701, 13)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.868-0500 W STORAGE [conn71] Could not complete validation of table:collection-121--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:38.051-0500 I COMMAND [conn156] command: unlock requested
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.874-0500 I INDEX [conn65] validating index consistency _id_hashed on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.864-0500 I NETWORK [conn147] end connection 127.0.0.1:39366 (44 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.948-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.819-0500 I INDEX [conn115] validating index consistency _id_ on collection config.mongos
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:41.986-0500 I COMMAND [ReplWriterWorker-8] CMD: drop config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.868-0500 I INDEX [conn71] validating the internal structure of index _id_ on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:38.054-0500 I COMMAND [conn156] fsyncUnlock completed. mongod is now unlocked and free to accept writes
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.875-0500 I INDEX [conn65] Validation complete for collection test2_fsmdb0.fsmcoll0 (UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:35.870-0500 I NETWORK [conn146] end connection 127.0.0.1:39362 (43 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:39.948-0500 I SHARDING [Sharding-Fixed-2] Updating config server with confirmed set shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.819-0500 I INDEX [conn115] Validation complete for collection config.mongos (UUID: 57207abe-6d8d-4102-a526-bc847dba6c09). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:41.986-0500 I STORAGE [ReplWriterWorker-8] dropCollection: config.cache.chunks.test2_fsmdb0.fsmcoll0 (c904d8e5-593f-4133-b81d-a4e28a1049f0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796701, 21), t: 1 } and commit timestamp Timestamp(1574796701, 21)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.868-0500 W STORAGE [conn71] Could not complete validation of table:index-122--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:38.056-0500 I NETWORK [conn155] end connection 127.0.0.1:46890 (40 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.876-0500 I NETWORK [conn65] end connection 127.0.0.1:35510 (9 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:36.195-0500 I COMMAND [conn70] CMD: dropIndexes test2_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.819-0500 I COMMAND [conn115] CMD: validate config.settings, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:41.975-0500 I COMMAND [ReplWriterWorker-2] CMD: drop test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:41.986-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for config.cache.chunks.test2_fsmdb0.fsmcoll0 (c904d8e5-593f-4133-b81d-a4e28a1049f0).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.868-0500 I INDEX [conn71] validating the internal structure of index _id_hashed on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:38.056-0500 I NETWORK [conn156] end connection 127.0.0.1:46894 (39 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.933-0500 I NETWORK [conn64] end connection 127.0.0.1:35478 (8 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:36.209-0500 I NETWORK [conn139] end connection 127.0.0.1:39268 (42 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.820-0500 W STORAGE [conn115] Could not complete validation of table:collection-45-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:41.975-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796701, 13), t: 1 } and commit timestamp Timestamp(1574796701, 13)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:41.986-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0 (c904d8e5-593f-4133-b81d-a4e28a1049f0)'. Ident: 'index-342--8000595249233899911', commit timestamp: 'Timestamp(1574796701, 21)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.868-0500 W STORAGE [conn71] Could not complete validation of table:index-123--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:38.060-0500 I NETWORK [conn154] end connection 127.0.0.1:46882 (38 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.947-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35514 #66 (9 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:36.210-0500 I NETWORK [conn140] end connection 127.0.0.1:39282 (41 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection config.settings
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:41.975-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:41.986-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0 (c904d8e5-593f-4133-b81d-a4e28a1049f0)'. Ident: 'index-343--8000595249233899911', commit timestamp: 'Timestamp(1574796701, 21)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.869-0500 I INDEX [conn71] validating collection test2_fsmdb0.fsmcoll0 (UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:38.067-0500 I NETWORK [conn153] end connection 127.0.0.1:46880 (37 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:39.947-0500 I NETWORK [conn66] received client metadata from 127.0.0.1:35514 conn66: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:36.211-0500 I NETWORK [conn141] end connection 127.0.0.1:39294 (40 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.820-0500 W STORAGE [conn115] Could not complete validation of table:index-46-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:41.975-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0)'. Ident: 'index-338--4104909142373009110', commit timestamp: 'Timestamp(1574796701, 13)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:41.986-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0'. Ident: collection-341--8000595249233899911, commit timestamp: Timestamp(1574796701, 21)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.870-0500 I INDEX [conn71] validating index consistency _id_ on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.716-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46922 #157 (38 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.959-0500 I COMMAND [ReplWriterWorker-13] CMD: drop test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:36.211-0500 I NETWORK [conn143] end connection 127.0.0.1:39302 (39 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn115] validating collection config.settings (UUID: 6d167d1d-0483-49b9-9ac8-ee5b66996698)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:41.975-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0)'. Ident: 'index-339--4104909142373009110', commit timestamp: 'Timestamp(1574796701, 13)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:41.975-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test2_fsmdb0.fsmcoll0'. Ident: collection-337--4104909142373009110, commit timestamp: Timestamp(1574796701, 13)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.870-0500 I INDEX [conn71] validating index consistency _id_hashed on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.717-0500 I NETWORK [conn157] received client metadata from 127.0.0.1:46922 conn157: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.959-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test2_fsmdb0.agg_out (08932b51-9933-4490-ab6b-1df6cfb57633) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796701, 5), t: 1 } and commit timestamp Timestamp(1574796701, 5)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:36.211-0500 I NETWORK [conn142] end connection 127.0.0.1:39296 (38 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn115] validating index consistency _id_ on collection config.settings
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:41.993-0500 I COMMAND [ReplWriterWorker-7] dropDatabase test2_fsmdb0 - starting
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:41.988-0500 I COMMAND [ReplWriterWorker-3] CMD: drop config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.871-0500 I INDEX [conn71] Validation complete for collection test2_fsmdb0.fsmcoll0 (UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.717-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46924 #158 (39 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.959-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test2_fsmdb0.agg_out (08932b51-9933-4490-ab6b-1df6cfb57633).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:36.223-0500 I NETWORK [conn138] end connection 127.0.0.1:39262 (37 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn115] Validation complete for collection config.settings (UUID: 6d167d1d-0483-49b9-9ac8-ee5b66996698). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:41.993-0500 I COMMAND [ReplWriterWorker-7] dropDatabase test2_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:41.988-0500 I STORAGE [ReplWriterWorker-3] dropCollection: config.cache.chunks.test2_fsmdb0.fsmcoll0 (c904d8e5-593f-4133-b81d-a4e28a1049f0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796701, 21), t: 1 } and commit timestamp Timestamp(1574796701, 21)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.872-0500 I NETWORK [conn71] end connection 127.0.0.1:52144 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.717-0500 I NETWORK [conn158] received client metadata from 127.0.0.1:46924 conn158: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.959-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.agg_out (08932b51-9933-4490-ab6b-1df6cfb57633)'. Ident: 'index-138--7234316082034423155', commit timestamp: 'Timestamp(1574796701, 5)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.678-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39404 #150 (38 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.820-0500 I COMMAND [conn115] CMD: validate config.shards, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:41.993-0500 I COMMAND [ReplWriterWorker-7] dropDatabase test2_fsmdb0 - finished
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:41.988-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for config.cache.chunks.test2_fsmdb0.fsmcoll0 (c904d8e5-593f-4133-b81d-a4e28a1049f0).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.933-0500 I NETWORK [conn70] end connection 127.0.0.1:52116 (8 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.801-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46948 #159 (40 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.959-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.agg_out (08932b51-9933-4490-ab6b-1df6cfb57633)'. Ident: 'index-147--7234316082034423155', commit timestamp: 'Timestamp(1574796701, 5)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.678-0500 I NETWORK [conn150] received client metadata from 127.0.0.1:39404 conn150: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.820-0500 W STORAGE [conn115] Could not complete validation of table:collection-25-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:41.998-0500 I SHARDING [ReplWriterWorker-11] setting this node's cached database version for test2_fsmdb0 to {}
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:41.988-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0 (c904d8e5-593f-4133-b81d-a4e28a1049f0)'. Ident: 'index-342--4104909142373009110', commit timestamp: 'Timestamp(1574796701, 21)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.947-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52152 #72 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.801-0500 I NETWORK [conn159] received client metadata from 127.0.0.1:46948 conn159: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.959-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test2_fsmdb0.agg_out'. Ident: collection-137--7234316082034423155, commit timestamp: Timestamp(1574796701, 5)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.679-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39408 #151 (39 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection config.shards
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:41.988-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0 (c904d8e5-593f-4133-b81d-a4e28a1049f0)'. Ident: 'index-343--4104909142373009110', commit timestamp: 'Timestamp(1574796701, 21)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:39.947-0500 I NETWORK [conn72] received client metadata from 127.0.0.1:52152 conn72: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.807-0500 I COMMAND [conn159] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.970-0500 I COMMAND [ReplWriterWorker-4] CMD: drop config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.679-0500 I NETWORK [conn151] received client metadata from 127.0.0.1:39408 conn151: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.820-0500 W STORAGE [conn115] Could not complete validation of table:index-26-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:41.988-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0'. Ident: collection-341--4104909142373009110, commit timestamp: Timestamp(1574796701, 21)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.808-0500 I INDEX [conn159] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.959-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.970-0500 I STORAGE [ReplWriterWorker-4] dropCollection: config.cache.chunks.test2_fsmdb0.agg_out (13bc0717-3ecb-47d5-aedd-db010ec932d6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796701, 9), t: 1 } and commit timestamp Timestamp(1574796701, 9)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.741-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39422 #152 (40 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn115] validating the internal structure of index host_1 on collection config.shards
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:41.994-0500 I COMMAND [ReplWriterWorker-9] dropDatabase test2_fsmdb0 - starting
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.810-0500 I INDEX [conn159] validating collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.959-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test2_fsmdb0.agg_out (08932b51-9933-4490-ab6b-1df6cfb57633) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796701, 5), t: 1 } and commit timestamp Timestamp(1574796701, 5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.970-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for config.cache.chunks.test2_fsmdb0.agg_out (13bc0717-3ecb-47d5-aedd-db010ec932d6).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.970-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test2_fsmdb0.agg_out (13bc0717-3ecb-47d5-aedd-db010ec932d6)'. Ident: 'index-156--7234316082034423155', commit timestamp: 'Timestamp(1574796701, 9)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.820-0500 W STORAGE [conn115] Could not complete validation of table:index-27-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:41.994-0500 I COMMAND [ReplWriterWorker-9] dropDatabase test2_fsmdb0 - dropped 0 collection(s)
[fsm_workload_test:agg_out] 2019-11-26T14:31:44.064-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.810-0500 I INDEX [conn159] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.959-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test2_fsmdb0.agg_out (08932b51-9933-4490-ab6b-1df6cfb57633).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.741-0500 I NETWORK [conn152] received client metadata from 127.0.0.1:39422 conn152: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.970-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test2_fsmdb0.agg_out (13bc0717-3ecb-47d5-aedd-db010ec932d6)'. Ident: 'index-157--7234316082034423155', commit timestamp: 'Timestamp(1574796701, 9)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn115] validating collection config.shards (UUID: ed6a2b77-0788-4ad3-a1b0-ccd61535c24f)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:41.994-0500 I COMMAND [ReplWriterWorker-9] dropDatabase test2_fsmdb0 - finished
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.810-0500 I INDEX [conn159] Validation complete for collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.959-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.agg_out (08932b51-9933-4490-ab6b-1df6cfb57633)'. Ident: 'index-138--2310912778499990807', commit timestamp: 'Timestamp(1574796701, 5)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.744-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39426 #153 (41 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.970-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'config.cache.chunks.test2_fsmdb0.agg_out'. Ident: collection-155--7234316082034423155, commit timestamp: Timestamp(1574796701, 9)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn115] validating index consistency _id_ on collection config.shards
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:42.000-0500 I SHARDING [ReplWriterWorker-11] setting this node's cached database version for test2_fsmdb0 to {}
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.812-0500 I COMMAND [conn159] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.959-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.agg_out (08932b51-9933-4490-ab6b-1df6cfb57633)'. Ident: 'index-147--2310912778499990807', commit timestamp: 'Timestamp(1574796701, 5)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.744-0500 I NETWORK [conn153] received client metadata from 127.0.0.1:39426 conn153: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.977-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn115] validating index consistency host_1 on collection config.shards
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.812-0500 I INDEX [conn159] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.959-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test2_fsmdb0.agg_out'. Ident: collection-137--2310912778499990807, commit timestamp: Timestamp(1574796701, 5)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.764-0500 I COMMAND [conn153] CMD fsync: sync:1 lock:1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.977-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796701, 14), t: 1 } and commit timestamp Timestamp(1574796701, 14)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.821-0500 I INDEX [conn115] Validation complete for collection config.shards (UUID: ed6a2b77-0788-4ad3-a1b0-ccd61535c24f). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.814-0500 I INDEX [conn159] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.970-0500 I COMMAND [ReplWriterWorker-12] CMD: drop config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.825-0500 W COMMAND [fsyncLockWorker] WARNING: instance is locked, blocking all writes. The fsync command has finished execution, remember to unlock the instance using fsyncUnlock().
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.977-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.821-0500 I COMMAND [conn115] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.816-0500 I INDEX [conn159] validating collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.970-0500 I STORAGE [ReplWriterWorker-12] dropCollection: config.cache.chunks.test2_fsmdb0.agg_out (13bc0717-3ecb-47d5-aedd-db010ec932d6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796701, 9), t: 1 } and commit timestamp Timestamp(1574796701, 9)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.825-0500 I COMMAND [conn153] mongod is locked and no writes are allowed. db.fsyncUnlock() to unlock
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.977-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0)'. Ident: 'index-122--7234316082034423155', commit timestamp: 'Timestamp(1574796701, 14)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.821-0500 W STORAGE [conn115] Could not complete validation of table:collection-53-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.816-0500 I INDEX [conn159] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.970-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for config.cache.chunks.test2_fsmdb0.agg_out (13bc0717-3ecb-47d5-aedd-db010ec932d6).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.825-0500 I COMMAND [conn153] Lock count is 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.977-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0)'. Ident: 'index-123--7234316082034423155', commit timestamp: 'Timestamp(1574796701, 14)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.821-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.816-0500 I INDEX [conn159] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.970-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test2_fsmdb0.agg_out (13bc0717-3ecb-47d5-aedd-db010ec932d6)'. Ident: 'index-156--2310912778499990807', commit timestamp: 'Timestamp(1574796701, 9)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:44.068-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.825-0500 I COMMAND [conn153] For more info see http://dochub.mongodb.org/core/fsynccommand
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.977-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test2_fsmdb0.fsmcoll0'. Ident: collection-121--7234316082034423155, commit timestamp: Timestamp(1574796701, 14)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.821-0500 W STORAGE [conn115] Could not complete validation of table:index-54-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.816-0500 I INDEX [conn159] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.970-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test2_fsmdb0.agg_out (13bc0717-3ecb-47d5-aedd-db010ec932d6)'. Ident: 'index-157--2310912778499990807', commit timestamp: 'Timestamp(1574796701, 9)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.933-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796692, 6)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.987-0500 I COMMAND [ReplWriterWorker-11] CMD: drop config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.821-0500 I INDEX [conn115] validating collection config.system.sessions (UUID: 9014747b-5aa2-462f-9e13-1e6b27298390)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.817-0500 I COMMAND [conn159] CMD: validate config.cache.chunks.test2_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.970-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'config.cache.chunks.test2_fsmdb0.agg_out'. Ident: collection-155--2310912778499990807, commit timestamp: Timestamp(1574796701, 9)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.955-0500 I COMMAND [conn153] command: unlock requested
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.987-0500 I STORAGE [ReplWriterWorker-11] dropCollection: config.cache.chunks.test2_fsmdb0.fsmcoll0 (e923876b-cb14-4999-bce6-e0591b1153b2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796701, 23), t: 1 } and commit timestamp Timestamp(1574796701, 23)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.821-0500 I INDEX [conn115] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.821-0500 I INDEX [conn115] Validation complete for collection config.system.sessions (UUID: 9014747b-5aa2-462f-9e13-1e6b27298390). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.977-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.957-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-301-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.987-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for config.cache.chunks.test2_fsmdb0.fsmcoll0 (e923876b-cb14-4999-bce6-e0591b1153b2).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.817-0500 W STORAGE [conn159] Could not complete validation of table:collection-145--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.822-0500 I COMMAND [conn115] CMD: validate config.tags, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.977-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796701, 14), t: 1 } and commit timestamp Timestamp(1574796701, 14)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.957-0500 I COMMAND [conn153] fsyncUnlock completed. mongod is now unlocked and free to accept writes
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.988-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0 (e923876b-cb14-4999-bce6-e0591b1153b2)'. Ident: 'index-126--7234316082034423155', commit timestamp: 'Timestamp(1574796701, 23)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.817-0500 I INDEX [conn159] validating the internal structure of index _id_ on collection config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.822-0500 W STORAGE [conn115] Could not complete validation of table:collection-35-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.822-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection config.tags
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.959-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-302-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.988-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0 (e923876b-cb14-4999-bce6-e0591b1153b2)'. Ident: 'index-127--7234316082034423155', commit timestamp: 'Timestamp(1574796701, 23)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.817-0500 W STORAGE [conn159] Could not complete validation of table:index-147--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.977-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.822-0500 W STORAGE [conn115] Could not complete validation of table:index-36-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.959-0500 I NETWORK [conn152] end connection 127.0.0.1:39422 (40 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.988-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0'. Ident: collection-125--7234316082034423155, commit timestamp: Timestamp(1574796701, 23)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.817-0500 I INDEX [conn159] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.977-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0)'. Ident: 'index-122--2310912778499990807', commit timestamp: 'Timestamp(1574796701, 14)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.822-0500 I INDEX [conn115] validating the internal structure of index ns_1_min_1 on collection config.tags
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.960-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-300-8224331490264904478 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.990-0500 I COMMAND [ReplWriterWorker-2] dropDatabase test2_fsmdb0 - starting
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.817-0500 W STORAGE [conn159] Could not complete validation of table:index-150--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.977-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0)'. Ident: 'index-123--2310912778499990807', commit timestamp: 'Timestamp(1574796701, 14)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.822-0500 W STORAGE [conn115] Could not complete validation of table:index-37-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:37.960-0500 I NETWORK [conn153] end connection 127.0.0.1:39426 (39 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.990-0500 I COMMAND [ReplWriterWorker-2] dropDatabase test2_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.817-0500 I INDEX [conn159] validating collection config.cache.chunks.test2_fsmdb0.agg_out (UUID: 13bc0717-3ecb-47d5-aedd-db010ec932d6)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.977-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test2_fsmdb0.fsmcoll0'. Ident: collection-121--2310912778499990807, commit timestamp: Timestamp(1574796701, 14)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.822-0500 I INDEX [conn115] validating the internal structure of index ns_1_tag_1 on collection config.tags
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:38.060-0500 I NETWORK [conn151] end connection 127.0.0.1:39408 (38 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.990-0500 I COMMAND [ReplWriterWorker-2] dropDatabase test2_fsmdb0 - finished
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.817-0500 I INDEX [conn159] validating index consistency _id_ on collection config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.987-0500 I COMMAND [ReplWriterWorker-5] CMD: drop config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.822-0500 W STORAGE [conn115] Could not complete validation of table:index-38-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:38.067-0500 I NETWORK [conn150] end connection 127.0.0.1:39404 (37 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:41.999-0500 I SHARDING [ReplWriterWorker-1] setting this node's cached database version for test2_fsmdb0 to {}
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.817-0500 I INDEX [conn159] validating index consistency lastmod_1 on collection config.cache.chunks.test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.987-0500 I STORAGE [ReplWriterWorker-5] dropCollection: config.cache.chunks.test2_fsmdb0.fsmcoll0 (e923876b-cb14-4999-bce6-e0591b1153b2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796701, 23), t: 1 } and commit timestamp Timestamp(1574796701, 23)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.822-0500 I INDEX [conn115] validating collection config.tags (UUID: d225b508-e40e-4c3c-a716-26adc4561055)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.714-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39448 #154 (38 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.817-0500 I INDEX [conn159] Validation complete for collection config.cache.chunks.test2_fsmdb0.agg_out (UUID: 13bc0717-3ecb-47d5-aedd-db010ec932d6). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.987-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for config.cache.chunks.test2_fsmdb0.fsmcoll0 (e923876b-cb14-4999-bce6-e0591b1153b2).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.822-0500 I INDEX [conn115] validating index consistency _id_ on collection config.tags
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.714-0500 I NETWORK [conn154] received client metadata from 127.0.0.1:39448 conn154: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.817-0500 I COMMAND [conn159] CMD: validate config.cache.chunks.test2_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.987-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0 (e923876b-cb14-4999-bce6-e0591b1153b2)'. Ident: 'index-126--2310912778499990807', commit timestamp: 'Timestamp(1574796701, 23)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.822-0500 I INDEX [conn115] validating index consistency ns_1_min_1 on collection config.tags
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.715-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39450 #155 (39 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.818-0500 W STORAGE [conn159] Could not complete validation of table:collection-116--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.987-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0 (e923876b-cb14-4999-bce6-e0591b1153b2)'. Ident: 'index-127--2310912778499990807', commit timestamp: 'Timestamp(1574796701, 23)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.822-0500 I INDEX [conn115] validating index consistency ns_1_tag_1 on collection config.tags
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.715-0500 I NETWORK [conn155] received client metadata from 127.0.0.1:39450 conn155: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.818-0500 I INDEX [conn159] validating the internal structure of index _id_ on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.987-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0'. Ident: collection-125--2310912778499990807, commit timestamp: Timestamp(1574796701, 23)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.822-0500 I INDEX [conn115] Validation complete for collection config.tags (UUID: d225b508-e40e-4c3c-a716-26adc4561055). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.801-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39484 #156 (40 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.818-0500 W STORAGE [conn159] Could not complete validation of table:index-117--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.990-0500 I COMMAND [ReplWriterWorker-15] dropDatabase test2_fsmdb0 - starting
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.823-0500 I COMMAND [conn115] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.801-0500 I NETWORK [conn156] received client metadata from 127.0.0.1:39484 conn156: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.818-0500 I INDEX [conn159] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.990-0500 I COMMAND [ReplWriterWorker-15] dropDatabase test2_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.823-0500 W STORAGE [conn115] Could not complete validation of table:collection-15-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.807-0500 I COMMAND [conn156] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.818-0500 W STORAGE [conn159] Could not complete validation of table:index-118--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:41.990-0500 I COMMAND [ReplWriterWorker-15] dropDatabase test2_fsmdb0 - finished
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.823-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.808-0500 I INDEX [conn156] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.818-0500 I INDEX [conn159] validating collection config.cache.chunks.test2_fsmdb0.fsmcoll0 (UUID: e923876b-cb14-4999-bce6-e0591b1153b2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:42.000-0500 I SHARDING [ReplWriterWorker-14] setting this node's cached database version for test2_fsmdb0 to {}
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.823-0500 W STORAGE [conn115] Could not complete validation of table:index-16-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.811-0500 I INDEX [conn156] validating collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.818-0500 I INDEX [conn159] validating index consistency _id_ on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.823-0500 I INDEX [conn115] validating collection config.transactions (UUID: c2741992-901b-4092-a01f-3dfe88ab21c5)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.811-0500 I INDEX [conn156] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.818-0500 I INDEX [conn159] validating index consistency lastmod_1 on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.823-0500 I INDEX [conn115] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.811-0500 I INDEX [conn156] Validation complete for collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.818-0500 I INDEX [conn159] Validation complete for collection config.cache.chunks.test2_fsmdb0.fsmcoll0 (UUID: e923876b-cb14-4999-bce6-e0591b1153b2). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.823-0500 I INDEX [conn115] Validation complete for collection config.transactions (UUID: c2741992-901b-4092-a01f-3dfe88ab21c5). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.813-0500 I COMMAND [conn156] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.819-0500 I COMMAND [conn159] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.824-0500 I COMMAND [conn115] CMD: validate config.version, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.814-0500 I INDEX [conn156] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.819-0500 W STORAGE [conn159] Could not complete validation of table:collection-18--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.824-0500 W STORAGE [conn115] Could not complete validation of table:collection-39-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.816-0500 I INDEX [conn156] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.819-0500 I INDEX [conn159] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.824-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection config.version
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.819-0500 I INDEX [conn156] validating collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.819-0500 W STORAGE [conn159] Could not complete validation of table:index-20--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.824-0500 W STORAGE [conn115] Could not complete validation of table:index-40-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.819-0500 I INDEX [conn156] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.819-0500 I INDEX [conn159] validating collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.824-0500 I INDEX [conn115] validating collection config.version (UUID: d52b8328-6d55-4f54-8cfd-e715a58e3315)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.819-0500 I INDEX [conn156] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.819-0500 I INDEX [conn159] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.824-0500 I INDEX [conn115] validating index consistency _id_ on collection config.version
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.819-0500 I INDEX [conn156] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.819-0500 I INDEX [conn159] Validation complete for collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.824-0500 I INDEX [conn115] Validation complete for collection config.version (UUID: d52b8328-6d55-4f54-8cfd-e715a58e3315). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.820-0500 I COMMAND [conn156] CMD: validate config.cache.chunks.test2_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.819-0500 I COMMAND [conn159] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.826-0500 I COMMAND [conn115] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.820-0500 W STORAGE [conn156] Could not complete validation of table:collection-331-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.820-0500 W STORAGE [conn159] Could not complete validation of table:collection-17--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.826-0500 W STORAGE [conn115] Could not complete validation of table:collection-10-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn156] validating the internal structure of index _id_ on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn159] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.826-0500 I INDEX [conn115] validating collection local.oplog.rs (UUID: 5bb0c359-7cb9-48f8-8ff8-4b4c84c12ec5)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.820-0500 W STORAGE [conn156] Could not complete validation of table:index-332-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.820-0500 W STORAGE [conn159] Could not complete validation of table:index-19--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.826-0500 I INDEX [conn115] Validation complete for collection local.oplog.rs (UUID: 5bb0c359-7cb9-48f8-8ff8-4b4c84c12ec5). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn156] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn159] validating collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.827-0500 I COMMAND [conn115] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.820-0500 W STORAGE [conn156] Could not complete validation of table:index-333-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn159] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.828-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn156] validating collection config.cache.chunks.test2_fsmdb0.fsmcoll0 (UUID: c904d8e5-593f-4133-b81d-a4e28a1049f0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn159] Validation complete for collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.829-0500 I INDEX [conn115] validating collection local.replset.election (UUID: 5f00e271-c3c6-4d7b-9d39-1c8e9e8a77d4)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn156] validating index consistency _id_ on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.820-0500 I COMMAND [conn159] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.830-0500 I INDEX [conn115] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn156] validating index consistency lastmod_1 on collection config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.820-0500 W STORAGE [conn159] Could not complete validation of table:collection-15--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.830-0500 I INDEX [conn115] Validation complete for collection local.replset.election (UUID: 5f00e271-c3c6-4d7b-9d39-1c8e9e8a77d4). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn156] Validation complete for collection config.cache.chunks.test2_fsmdb0.fsmcoll0 (UUID: c904d8e5-593f-4133-b81d-a4e28a1049f0). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn159] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.830-0500 I COMMAND [conn115] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.821-0500 I COMMAND [conn156] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.820-0500 W STORAGE [conn159] Could not complete validation of table:index-16--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.831-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.821-0500 W STORAGE [conn156] Could not complete validation of table:collection-20-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn159] validating collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.833-0500 I INDEX [conn115] validating collection local.replset.minvalid (UUID: ce934bfb-84f4-4d44-a963-37c09c6c95a6)
[fsm_workload_test:agg_out] 2019-11-26T14:31:44.115-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.821-0500 I INDEX [conn156] validating the internal structure of index _id_ on collection config.cache.collections
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:44.118-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn159] validating index consistency _id_ on collection config.transactions
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.743-0500 Implicit session: session { "id" : UUID("8aa996c8-30e1-454c-9fd1-7c5b6e5a3e00") }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.116-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45362 #127 (1 connection now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.744-0500 Implicit session: session { "id" : UUID("c2dc9cbd-6836-4f32-b088-905379066380") }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.131-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53434 #71 (12 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.744-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.131-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52548 #77 (11 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.134-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52190 #73 (10 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.744-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.134-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35552 #67 (10 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.744-0500 true
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:44.157-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58558 #45 (1 connection now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.745-0500 true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.833-0500 I INDEX [conn115] validating index consistency _id_ on collection local.replset.minvalid
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.745-0500 2019-11-26T14:31:44.132-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.821-0500 W STORAGE [conn156] Could not complete validation of table:index-23-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.745-0500 2019-11-26T14:31:44.126-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.820-0500 I INDEX [conn159] Validation complete for collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.824-0500 I COMMAND [conn159] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.824-0500 W STORAGE [conn159] Could not complete validation of table:collection-10--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.745-0500 2019-11-26T14:31:44.133-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.131-0500 I NETWORK [conn71] received client metadata from 127.0.0.1:53434 conn71: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.134-0500 I NETWORK [conn73] received client metadata from 127.0.0.1:52190 conn73: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.745-0500 2019-11-26T14:31:44.127-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.134-0500 I NETWORK [conn67] received client metadata from 127.0.0.1:35552 conn67: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.746-0500 2019-11-26T14:31:44.133-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:44.158-0500 I NETWORK [conn45] received client metadata from 127.0.0.1:58558 conn45: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.746-0500 2019-11-26T14:31:44.128-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.833-0500 I INDEX [conn115] Validation complete for collection local.replset.minvalid (UUID: ce934bfb-84f4-4d44-a963-37c09c6c95a6). No corruption found.
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.746-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.821-0500 I INDEX [conn156] validating collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.746-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.116-0500 I NETWORK [conn127] received client metadata from 127.0.0.1:45362 conn127: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.746-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.131-0500 I NETWORK [conn77] received client metadata from 127.0.0.1:52548 conn77: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.747-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.824-0500 I INDEX [conn159] validating collection local.oplog.rs (UUID: f999d0d7-cb6c-4d2c-a5ff-807a7ed09766)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.747-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.137-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53456 #72 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.747-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.139-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52204 #74 (11 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.747-0500 [jsTest] New session started with sessionID: { "id" : UUID("0ece6bec-6db6-4727-afcd-cd758d85a0c9") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.139-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35566 #68 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.747-0500 [jsTest] New session started with sessionID: { "id" : UUID("18924fa2-c38f-4374-b06d-7f4fccd94d58") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:44.635-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58618 #46 (2 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.747-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.833-0500 I COMMAND [conn115] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.748-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.821-0500 I INDEX [conn156] validating index consistency _id_ on collection config.cache.collections
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.748-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.118-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45364 #128 (2 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.748-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.136-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52566 #78 (12 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.748-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.834-0500 I INDEX [conn159] Validation complete for collection local.oplog.rs (UUID: f999d0d7-cb6c-4d2c-a5ff-807a7ed09766). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.748-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.137-0500 I NETWORK [conn72] received client metadata from 127.0.0.1:53456 conn72: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.748-0500 2019-11-26T14:31:44.136-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.139-0500 I NETWORK [conn74] received client metadata from 127.0.0.1:52204 conn74: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.749-0500 2019-11-26T14:31:44.131-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.139-0500 I NETWORK [conn68] received client metadata from 127.0.0.1:35566 conn68: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.749-0500 2019-11-26T14:31:44.136-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:44.635-0500 I NETWORK [conn46] received client metadata from 127.0.0.1:58618 conn46: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.749-0500 2019-11-26T14:31:44.131-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.837-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.749-0500 2019-11-26T14:31:44.136-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.821-0500 I INDEX [conn156] Validation complete for collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.749-0500 2019-11-26T14:31:44.131-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.118-0500 I NETWORK [conn128] received client metadata from 127.0.0.1:45364 conn128: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.749-0500 2019-11-26T14:31:44.136-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.137-0500 I NETWORK [conn78] received client metadata from 127.0.0.1:52566 conn78: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.749-0500 2019-11-26T14:31:44.131-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.835-0500 I COMMAND [conn159] CMD: validate local.replset.election, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.750-0500 2019-11-26T14:31:44.137-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.166-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53492 #73 (14 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.750-0500 2019-11-26T14:31:44.132-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.171-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52242 #75 (12 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.750-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.171-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35604 #69 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.750-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:44.638-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58622 #47 (3 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.750-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.841-0500 I INDEX [conn115] validating collection local.replset.oplogTruncateAfterPoint (UUID: b5258dce-fb89-4436-a191-b8586ea2e6c0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.750-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.822-0500 I COMMAND [conn156] CMD: validate config.cache.databases, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.751-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.149-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45406 #129 (3 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.751-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.166-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52602 #79 (13 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.751-0500 [jsTest] New session started with sessionID: { "id" : UUID("08a565f5-ca38-481c-85bf-0158ce6db8c9") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.836-0500 I INDEX [conn159] validating the internal structure of index _id_ on collection local.replset.election
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.751-0500 [jsTest] New session started with sessionID: { "id" : UUID("b528ba95-1c97-4b7f-b3a6-20bb68e72390") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.167-0500 I NETWORK [conn73] received client metadata from 127.0.0.1:53492 conn73: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.751-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.171-0500 I NETWORK [conn75] received client metadata from 127.0.0.1:52242 conn75: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.751-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.172-0500 I NETWORK [conn69] received client metadata from 127.0.0.1:35604 conn69: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.751-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:44.638-0500 I NETWORK [conn47] received client metadata from 127.0.0.1:58622 conn47: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.752-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:44.648-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 from version {} to version { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.752-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.822-0500 W STORAGE [conn156] Could not complete validation of table:collection-19-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.752-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.150-0500 I NETWORK [conn129] received client metadata from 127.0.0.1:45406 conn129: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.752-0500 2019-11-26T14:31:44.138-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.166-0500 I NETWORK [conn79] received client metadata from 127.0.0.1:52602 conn79: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.752-0500 2019-11-26T14:31:44.133-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.838-0500 I INDEX [conn159] validating collection local.replset.election (UUID: 101a66fe-c3c0-4bee-94b9-e9bb8d04aa79)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.752-0500 2019-11-26T14:31:44.139-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.199-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53516 #74 (15 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.753-0500 2019-11-26T14:31:44.133-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.204-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52264 #76 (13 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.753-0500 2019-11-26T14:31:44.139-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.205-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35626 #70 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.753-0500 2019-11-26T14:31:44.133-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.841-0500 I INDEX [conn115] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.753-0500 2019-11-26T14:31:44.139-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:44.650-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.fsmcoll0 to version 1|3||5ddd7da0cf8184c2e1493df9 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.753-0500 2019-11-26T14:31:44.134-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.822-0500 I INDEX [conn156] validating the internal structure of index _id_ on collection config.cache.databases
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.753-0500 2019-11-26T14:31:44.139-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.157-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45414 #130 (4 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.753-0500 2019-11-26T14:31:44.134-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.198-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52626 #80 (14 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.754-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.838-0500 I INDEX [conn159] validating index consistency _id_ on collection local.replset.election
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.754-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.199-0500 I NETWORK [conn74] received client metadata from 127.0.0.1:53516 conn74: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.754-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.205-0500 I NETWORK [conn76] received client metadata from 127.0.0.1:52264 conn76: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.754-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.205-0500 I NETWORK [conn70] received client metadata from 127.0.0.1:35626 conn70: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.213-0500 I STORAGE [ReplWriterWorker-3] createCollection: test3_fsmdb0.fsmcoll0 with provided UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635 and options: { uuid: UUID("81145456-1c0e-4ef0-89a6-ab06e3485635") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.754-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:44.849-0500 I COMMAND [conn46] command test3_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("c396db2f-788f-419f-b744-d7ae3889c6f5") }, $db: "test3_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 201ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.822-0500 W STORAGE [conn156] Could not complete validation of table:index-21-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.754-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.157-0500 I NETWORK [conn130] received client metadata from 127.0.0.1:45414 conn130: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.755-0500 [jsTest] New session started with sessionID: { "id" : UUID("93930b55-6270-4b93-9c83-0a35d59aef1c") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.199-0500 I NETWORK [conn80] received client metadata from 127.0.0.1:52626 conn80: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.755-0500 [jsTest] New session started with sessionID: { "id" : UUID("0817472f-6548-4b9e-a9c4-d32d72c06b52") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.838-0500 I INDEX [conn159] Validation complete for collection local.replset.election (UUID: 101a66fe-c3c0-4bee-94b9-e9bb8d04aa79). No corruption found.
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.755-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.209-0500 W CONTROL [conn74] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 323 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.755-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.213-0500 I STORAGE [ReplWriterWorker-1] createCollection: test3_fsmdb0.fsmcoll0 with provided UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635 and options: { uuid: UUID("81145456-1c0e-4ef0-89a6-ab06e3485635") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.755-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.841-0500 I INDEX [conn115] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: b5258dce-fb89-4436-a191-b8586ea2e6c0). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.755-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.216-0500 W CONTROL [conn70] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 43 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.755-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.822-0500 I INDEX [conn156] validating collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.756-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.188-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45442 #131 (5 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.756-0500 setting random seed: 3116373635
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.209-0500 W CONTROL [conn80] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 718 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.756-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.839-0500 I COMMAND [conn159] CMD: validate local.replset.minvalid, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.756-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.214-0500 W CONTROL [conn74] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 323 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.756-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.215-0500 W CONTROL [conn76] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 40 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.756-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.842-0500 I COMMAND [conn115] CMD: validate local.startup_log, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.756-0500 Implicit session: session { "id" : UUID("a0fc9edc-7388-426d-83d1-30f8d210bf42") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.227-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test3_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.757-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.822-0500 I INDEX [conn156] validating index consistency _id_ on collection config.cache.databases
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.757-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.188-0500 I NETWORK [conn131] received client metadata from 127.0.0.1:45442 conn131: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.757-0500 [jsTest] New session started with sessionID: { "id" : UUID("804be35e-8980-48a9-a2c5-d9b492a65bf5") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.213-0500 W CONTROL [conn80] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 718 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.757-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.840-0500 I INDEX [conn159] validating the internal structure of index _id_ on collection local.replset.minvalid
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.757-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.216-0500 I NETWORK [conn74] end connection 127.0.0.1:53516 (14 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.757-0500 Implicit session: session { "id" : UUID("20821b59-d7f3-4198-8e1f-e0a1c3fb5f02") }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.228-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test3_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.757-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.842-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection local.startup_log
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.757-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.245-0500 I INDEX [ReplWriterWorker-6] index build: starting on test3_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.758-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.822-0500 I INDEX [conn156] Validation complete for collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.758-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.193-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45444 #132 (6 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.758-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.216-0500 I NETWORK [conn80] end connection 127.0.0.1:52626 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.758-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.842-0500 I INDEX [conn159] validating collection local.replset.minvalid (UUID: 5dfed1a1-c7a1-4f91-a3da-2544e54d2e9a)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.758-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.263-0500 I STORAGE [ReplWriterWorker-14] createCollection: test3_fsmdb0.fsmcoll0 with provided UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635 and options: { uuid: UUID("81145456-1c0e-4ef0-89a6-ab06e3485635") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.758-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.245-0500 I INDEX [ReplWriterWorker-6] index build: starting on test3_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.758-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.844-0500 I INDEX [conn115] validating collection local.startup_log (UUID: a1488758-c116-4144-adba-02b8f3b8144d)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.759-0500 [jsTest] New session started with sessionID: { "id" : UUID("62f29b4c-373d-44a7-ac6f-a3035e17cdfc") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.245-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.759-0500 [jsTest] New session started with sessionID: { "id" : UUID("3aca532b-3bde-4df7-9f2e-6b963b8b623c") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.823-0500 I COMMAND [conn156] CMD: validate config.system.sessions, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.759-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.193-0500 I NETWORK [conn132] received client metadata from 127.0.0.1:45444 conn132: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.759-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.264-0500 I NETWORK [conn77] end connection 127.0.0.1:52548 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.759-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.842-0500 I INDEX [conn159] validating index consistency _id_ on collection local.replset.minvalid
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.759-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.264-0500 I NETWORK [conn71] end connection 127.0.0.1:53434 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.759-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.245-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.759-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.844-0500 I INDEX [conn115] validating index consistency _id_ on collection local.startup_log
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.760-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.245-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: cb805b23-7132-443e-b6ff-33389464f65c: test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635 ): indexes: 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.760-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.824-0500 I INDEX [conn156] validating the internal structure of index _id_ on collection config.system.sessions
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.760-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.215-0500 I NETWORK [conn131] end connection 127.0.0.1:45442 (5 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.760-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.275-0500 I STORAGE [ReplWriterWorker-8] createCollection: test3_fsmdb0.fsmcoll0 with provided UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635 and options: { uuid: UUID("81145456-1c0e-4ef0-89a6-ab06e3485635") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.760-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.842-0500 I INDEX [conn159] Validation complete for collection local.replset.minvalid (UUID: 5dfed1a1-c7a1-4f91-a3da-2544e54d2e9a). No corruption found.
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.760-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.274-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test3_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.760-0500 [jsTest] New session started with sessionID: { "id" : UUID("5ebf7ba8-67d2-4291-81a2-58bb195f38b2") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.245-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 3acdc2a0-5e02-416c-bfcb-acf4885deffb: test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635 ): indexes: 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.760-0500 [jsTest] New session started with sessionID: { "id" : UUID("4158e6da-ff70-48ae-82ff-42424dc73694") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.844-0500 I INDEX [conn115] Validation complete for collection local.startup_log (UUID: a1488758-c116-4144-adba-02b8f3b8144d). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.761-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.245-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.761-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.826-0500 I INDEX [conn156] validating the internal structure of index lsidTTLIndex on collection config.system.sessions
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.761-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.253-0500 I NETWORK [conn132] end connection 127.0.0.1:45444 (4 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.761-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.289-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test3_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.761-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.843-0500 I COMMAND [conn159] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.761-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.289-0500 I INDEX [ReplWriterWorker-6] index build: starting on test3_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.761-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.245-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.761-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.845-0500 I COMMAND [conn115] CMD: validate local.system.replset, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.762-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.246-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.762-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.829-0500 I INDEX [conn156] validating collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.762-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.257-0500 I NETWORK [conn128] end connection 127.0.0.1:45364 (3 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.762-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.302-0500 I INDEX [ReplWriterWorker-0] index build: starting on test3_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.762-0500 [jsTest] New session started with sessionID: { "id" : UUID("312911a6-c3dc-462b-af4b-7185af1a8ac4") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.843-0500 I INDEX [conn159] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.762-0500 [jsTest] New session started with sessionID: { "id" : UUID("9b4f5cde-12b6-4ce3-aec6-1b46dc7ba39c") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.290-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.762-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.245-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.762-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.846-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection local.system.replset
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.763-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.249-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.763-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.829-0500 I INDEX [conn156] validating index consistency _id_ on collection config.system.sessions
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.763-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.345-0500 I COMMAND [conn130] command test3_fsmdb0.fsmcoll0 appName: "MongoDB Shell" command: shardCollection { shardCollection: "test3_fsmdb0.fsmcoll0", key: { _id: "hashed" }, lsid: { id: UUID("53087a90-9976-412a-8155-7dc18f9c5dc2") }, $clusterTime: { clusterTime: Timestamp(1574796704, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:245 protocol:op_msg 148ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.763-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.302-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.763-0500 Running data consistency checks for replica set: shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.845-0500 I INDEX [conn159] validating collection local.replset.oplogTruncateAfterPoint (UUID: 31ce824c-ef86-4223-a4be-3069dae7b5f2)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.763-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.290-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 93531b32-5451-4d44-b828-03ad37391d00: test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.763-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.248-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.763-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.848-0500 I INDEX [conn115] validating collection local.system.replset (UUID: ea98bf03-b956-4e01-b9a4-857e601cceda)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.764-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.251-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: cb805b23-7132-443e-b6ff-33389464f65c: test3_fsmdb0.fsmcoll0 ( 81145456-1c0e-4ef0-89a6-ab06e3485635 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.764-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.830-0500 I INDEX [conn156] validating index consistency lsidTTLIndex on collection config.system.sessions
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.764-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.438-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 from version {} to version { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.764-0500 [jsTest] New session started with sessionID: { "id" : UUID("fdcb477f-a225-4b92-b193-0aa74dd0cdb5") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.302-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 37d96cb8-5172-4c50-9ebd-ecc82ac6ce5a: test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.764-0500 [jsTest] New session started with sessionID: { "id" : UUID("6e8e90b8-1508-4ec2-a500-d72cae14e3f1") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.845-0500 I INDEX [conn159] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.764-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.290-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.764-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.250-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3acdc2a0-5e02-416c-bfcb-acf4885deffb: test3_fsmdb0.fsmcoll0 ( 81145456-1c0e-4ef0-89a6-ab06e3485635 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.764-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.848-0500 I INDEX [conn115] validating index consistency _id_ on collection local.system.replset
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.765-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.251-0500 W CONTROL [conn70] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 48 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.765-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.830-0500 I INDEX [conn156] Validation complete for collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.765-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.440-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.fsmcoll0 to version 1|3||5ddd7da0cf8184c2e1493df9 took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.765-0500 Recreating replica set from config {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.302-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.765-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.845-0500 I INDEX [conn159] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 31ce824c-ef86-4223-a4be-3069dae7b5f2). No corruption found.
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.765-0500 "_id" : "config-rs",
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.290-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.765-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.251-0500 W CONTROL [conn76] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 44 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.765-0500 "version" : 1,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.848-0500 I INDEX [conn115] Validation complete for collection local.system.replset (UUID: ea98bf03-b956-4e01-b9a4-857e601cceda). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.766-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.254-0500 I NETWORK [conn70] end connection 127.0.0.1:35626 (12 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.766-0500 "configsvr" : true,
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.831-0500 I COMMAND [conn156] CMD: validate config.transactions, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.766-0500 [jsTest] New session started with sessionID: { "id" : UUID("61405ae2-81e6-42ec-8a5f-bbc28588ecdc") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.528-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.agg_out took 0 ms and found the collection is not sharded
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.766-0500 "protocolVersion" : NumberLong(1),
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.303-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.766-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.846-0500 I COMMAND [conn159] CMD: validate local.startup_log, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.766-0500 "writeConcernMajorityJournalDefault" : true,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.293-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test3_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.766-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.253-0500 I NETWORK [conn76] end connection 127.0.0.1:52264 (12 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.766-0500 "members" : [
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.848-0500 I COMMAND [conn115] CMD: validate local.system.rollback.id, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.767-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.263-0500 I NETWORK [conn67] end connection 127.0.0.1:35552 (11 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.767-0500 {
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.831-0500 W STORAGE [conn156] Could not complete validation of table:collection-15-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.767-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.604-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45462 #133 (4 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.767-0500 "_id" : 0,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.306-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test3_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.767-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.847-0500 I INDEX [conn159] validating the internal structure of index _id_ on collection local.startup_log
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.767-0500 "host" : "localhost:20000",
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.296-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 93531b32-5451-4d44-b828-03ad37391d00: test3_fsmdb0.fsmcoll0 ( 81145456-1c0e-4ef0-89a6-ab06e3485635 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.767-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.264-0500 I NETWORK [conn73] end connection 127.0.0.1:52190 (11 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.767-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.849-0500 I INDEX [conn115] validating the internal structure of index _id_ on collection local.system.rollback.id
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.768-0500 [jsTest] New session started with sessionID: { "id" : UUID("d4f12b54-9e88-4a8d-adc8-823e40907fcf") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.316-0500 I STORAGE [ReplWriterWorker-2] createCollection: config.cache.chunks.test3_fsmdb0.fsmcoll0 with provided UUID: a33e44c0-60ea-478a-83bd-e45f3213aca7 and options: { uuid: UUID("a33e44c0-60ea-478a-83bd-e45f3213aca7") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.768-0500 "buildIndexes" : true,
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.831-0500 I INDEX [conn156] validating the internal structure of index _id_ on collection config.transactions
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.768-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.605-0500 I NETWORK [conn133] received client metadata from 127.0.0.1:45462 conn133: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.768-0500 "hidden" : false,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.308-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 37d96cb8-5172-4c50-9ebd-ecc82ac6ce5a: test3_fsmdb0.fsmcoll0 ( 81145456-1c0e-4ef0-89a6-ab06e3485635 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.768-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.849-0500 I INDEX [conn159] validating collection local.startup_log (UUID: fd9e05bb-cd6c-441c-9265-3783d4065b03)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.768-0500 "priority" : 1,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.318-0500 I STORAGE [ReplWriterWorker-7] createCollection: config.cache.chunks.test3_fsmdb0.fsmcoll0 with provided UUID: d291b2bc-f179-4f06-8164-0b81d0131eb1 and options: { uuid: UUID("d291b2bc-f179-4f06-8164-0b81d0131eb1") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.768-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.316-0500 I STORAGE [ReplWriterWorker-15] createCollection: config.cache.chunks.test3_fsmdb0.fsmcoll0 with provided UUID: a33e44c0-60ea-478a-83bd-e45f3213aca7 and options: { uuid: UUID("a33e44c0-60ea-478a-83bd-e45f3213aca7") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.768-0500 "tags" : {
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.851-0500 I INDEX [conn115] validating collection local.system.rollback.id (UUID: 0ad52f2a-9d3e-4f9f-b91b-17a9c570ab7e)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.769-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.332-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns config.cache.chunks.test3_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.769-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.831-0500 W STORAGE [conn156] Could not complete validation of table:index-16-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.769-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.614-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45464 #134 (5 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.769-0500 },
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.335-0500 I STORAGE [ReplWriterWorker-9] createCollection: config.cache.chunks.test3_fsmdb0.fsmcoll0 with provided UUID: d291b2bc-f179-4f06-8164-0b81d0131eb1 and options: { uuid: UUID("d291b2bc-f179-4f06-8164-0b81d0131eb1") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.769-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.849-0500 I INDEX [conn159] validating index consistency _id_ on collection local.startup_log
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.769-0500 "slaveDelay" : NumberLong(0),
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.334-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns config.cache.chunks.test3_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.769-0500 [jsTest] New session started with sessionID: { "id" : UUID("c64bf23e-fbd4-4984-94c6-6ef7a8c03b80") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.332-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns config.cache.chunks.test3_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.769-0500 "votes" : 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.851-0500 I INDEX [conn115] validating index consistency _id_ on collection local.system.rollback.id
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.770-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.352-0500 I INDEX [ReplWriterWorker-13] index build: starting on config.cache.chunks.test3_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.770-0500 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.831-0500 I INDEX [conn156] validating collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.770-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.615-0500 I NETWORK [conn134] received client metadata from 127.0.0.1:45464 conn134: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.770-0500 ],
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.352-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns config.cache.chunks.test3_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.770-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.849-0500 I INDEX [conn159] Validation complete for collection local.startup_log (UUID: fd9e05bb-cd6c-441c-9265-3783d4065b03). No corruption found.
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.770-0500 "settings" : {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.354-0500 I INDEX [ReplWriterWorker-14] index build: starting on config.cache.chunks.test3_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.770-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.351-0500 I INDEX [ReplWriterWorker-7] index build: starting on config.cache.chunks.test3_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.770-0500 "chainingAllowed" : true,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.851-0500 I INDEX [conn115] Validation complete for collection local.system.rollback.id (UUID: 0ad52f2a-9d3e-4f9f-b91b-17a9c570ab7e). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.771-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.352-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.771-0500 "heartbeatIntervalMillis" : 2000,
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.831-0500 I INDEX [conn156] validating index consistency _id_ on collection config.transactions
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.771-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.617-0500 I NETWORK [conn133] end connection 127.0.0.1:45462 (4 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.771-0500 "heartbeatTimeoutSecs" : 10,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.366-0500 I INDEX [ReplWriterWorker-8] index build: starting on config.cache.chunks.test3_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.771-0500 [jsTest] New session started with sessionID: { "id" : UUID("132b48f2-693b-4e52-aea1-140be5e01a6e") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.850-0500 I COMMAND [conn159] CMD: validate local.system.replset, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.771-0500 "electionTimeoutMillis" : 86400000,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.354-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.771-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.351-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.771-0500 "catchUpTimeoutMillis" : -1,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.853-0500 I NETWORK [conn115] end connection 127.0.0.1:56734 (34 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.772-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.352-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 0e7be302-4140-4f85-9075-6519174460b2: config.cache.chunks.test3_fsmdb0.fsmcoll0 (a33e44c0-60ea-478a-83bd-e45f3213aca7 ): indexes: 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.772-0500 "catchUpTakeoverDelayMillis" : 30000,
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.831-0500 I INDEX [conn156] Validation complete for collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.772-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.624-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45466 #135 (5 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.772-0500 "getLastErrorModes" : {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.366-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.772-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.850-0500 I INDEX [conn159] validating the internal structure of index _id_ on collection local.system.replset
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.772-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.354-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 4624060b-42c9-495f-a13a-f9f839e5e2ce: config.cache.chunks.test3_fsmdb0.fsmcoll0 (d291b2bc-f179-4f06-8164-0b81d0131eb1 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.772-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.351-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 3ae5e157-1023-40ef-ac20-d7eaae6c54da: config.cache.chunks.test3_fsmdb0.fsmcoll0 (a33e44c0-60ea-478a-83bd-e45f3213aca7 ): indexes: 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.772-0500 },
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.922-0500 I NETWORK [conn114] end connection 127.0.0.1:56702 (33 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.773-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.352-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.773-0500 "getLastErrorDefaults" : {
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.834-0500 I COMMAND [conn156] CMD: validate local.oplog.rs, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.773-0500 [jsTest] New session started with sessionID: { "id" : UUID("2fbc8645-4040-4c89-85eb-412d67d3962b") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.624-0500 I NETWORK [conn135] received client metadata from 127.0.0.1:45466 conn135: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.773-0500 "w" : 1,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.366-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 4ad9c975-8c43-4c8e-a4be-15b02d10c1da: config.cache.chunks.test3_fsmdb0.fsmcoll0 (d291b2bc-f179-4f06-8164-0b81d0131eb1 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.773-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.853-0500 I INDEX [conn159] validating collection local.system.replset (UUID: 3eb8c3e8-f477-448c-9a25-5db5ef40b0d6)
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.773-0500 "wtimeout" : 0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.354-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.773-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.351-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.773-0500 },
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:39.933-0500 I NETWORK [conn113] end connection 127.0.0.1:56700 (32 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.353-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.774-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.834-0500 W STORAGE [conn156] Could not complete validation of table:collection-10-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.774-0500 "replicaSetId" : ObjectId("5ddd7d655cde74b6784bb14d")
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.626-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45468 #136 (6 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.774-0500 Running data consistency checks for replica set: shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.366-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.774-0500 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.853-0500 I INDEX [conn159] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.853-0500 I INDEX [conn159] Validation complete for collection local.system.replset (UUID: 3eb8c3e8-f477-448c-9a25-5db5ef40b0d6). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.853-0500 I COMMAND [conn159] CMD: validate local.system.rollback.id, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.774-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.354-0500 I SHARDING [ReplWriterWorker-4] Marking collection config.cache.chunks.test3_fsmdb0.fsmcoll0 as collection version:
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.774-0500 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:41.952-0500 I SHARDING [conn23] distributed lock 'test2_fsmdb0' acquired for 'dropDatabase', ts : 5ddd7d9d5cde74b6784bb742
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.774-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.834-0500 I INDEX [conn156] validating collection local.oplog.rs (UUID: 5f1b9ff7-2fef-4590-8e90-0f3704b0f5df)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.626-0500 I NETWORK [conn136] received client metadata from 127.0.0.1:45468 conn136: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.775-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.366-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.775-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.355-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.775-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.352-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.775-0500 [jsTest] New session started with sessionID: { "id" : UUID("af77d0aa-80d2-44f6-913c-4ba7ad9b2e60") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.854-0500 I INDEX [conn159] validating the internal structure of index _id_ on collection local.system.rollback.id
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.775-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.356-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.775-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:41.952-0500 I SHARDING [conn23] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:41.952-0500-5ddd7d9d5cde74b6784bb745", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55596", time: new Date(1574796701952), what: "dropDatabase.start", ns: "test2_fsmdb0", details: {} }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.776-0500 [jsTest] New session started with sessionID: { "id" : UUID("5e012121-5d2b-407c-85b9-7c18a5772d17") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.875-0500 I INDEX [conn156] Validation complete for collection local.oplog.rs (UUID: 5f1b9ff7-2fef-4590-8e90-0f3704b0f5df). No corruption found.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.626-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45470 #137 (7 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.776-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.367-0500 I SHARDING [ReplWriterWorker-2] Marking collection config.cache.chunks.test3_fsmdb0.fsmcoll0 as collection version:
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.776-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.355-0500 I SHARDING [ReplWriterWorker-0] Marking collection config.cache.chunks.test3_fsmdb0.fsmcoll0 as collection version:
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.776-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.352-0500 I SHARDING [ReplWriterWorker-4] Marking collection config.cache.chunks.test3_fsmdb0.fsmcoll0 as collection version:
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.776-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.856-0500 I INDEX [conn159] validating collection local.system.rollback.id (UUID: 223114bc-2956-4d9b-8f0a-5c567c2cb10e)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.776-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.356-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test3_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.776-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:41.954-0500 I SHARDING [conn23] distributed lock 'test2_fsmdb0.agg_out' acquired for 'dropCollection', ts : 5ddd7d9d5cde74b6784bb748
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.777-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.876-0500 I COMMAND [conn156] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.877-0500 I INDEX [conn156] validating the internal structure of index _id_ on collection local.replset.election
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.777-0500 Recreating replica set from config {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.370-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.777-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.357-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.777-0500 "_id" : "shard-rs0",
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.354-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.777-0500 [jsTest] New session started with sessionID: { "id" : UUID("354b36b9-4487-4617-a614-d3a71e39fca3") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.856-0500 I INDEX [conn159] validating index consistency _id_ on collection local.system.rollback.id
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.777-0500 "version" : 2,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.358-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 0e7be302-4140-4f85-9075-6519174460b2: config.cache.chunks.test3_fsmdb0.fsmcoll0 ( a33e44c0-60ea-478a-83bd-e45f3213aca7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.777-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:41.954-0500 I SHARDING [conn23] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:41.954-0500-5ddd7d9d5cde74b6784bb74a", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55596", time: new Date(1574796701954), what: "dropCollection.start", ns: "test2_fsmdb0.agg_out", details: {} }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:41.966-0500 I SHARDING [conn23] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:41.966-0500-5ddd7d9d5cde74b6784bb752", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55596", time: new Date(1574796701966), what: "dropCollection", ns: "test2_fsmdb0.agg_out", details: {} }
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.777-0500 "protocolVersion" : NumberLong(1),
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.879-0500 I INDEX [conn156] validating collection local.replset.election (UUID: 801ad0de-17c3-44b2-a878-e91b8de004c5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.778-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.370-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test3_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.778-0500 "writeConcernMajorityJournalDefault" : true,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.357-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test3_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.778-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.354-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test3_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.778-0500 "members" : [
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.856-0500 I INDEX [conn159] Validation complete for collection local.system.rollback.id (UUID: 223114bc-2956-4d9b-8f0a-5c567c2cb10e). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.778-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.508-0500 I STORAGE [ReplWriterWorker-8] createCollection: test3_fsmdb0.agg_out with provided UUID: 12bfc6f5-a5d9-4228-a70a-b419624ce864 and options: { uuid: UUID("12bfc6f5-a5d9-4228-a70a-b419624ce864") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.523-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test3_fsmdb0.agg_out
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.778-0500 {
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:41.968-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7d9d5cde74b6784bb748' unlocked.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.778-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.879-0500 I INDEX [conn156] validating index consistency _id_ on collection local.replset.election
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.778-0500 "_id" : 0,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:44.371-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 4ad9c975-8c43-4c8e-a4be-15b02d10c1da: config.cache.chunks.test3_fsmdb0.fsmcoll0 ( d291b2bc-f179-4f06-8164-0b81d0131eb1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:44.359-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 4624060b-42c9-495f-a13a-f9f839e5e2ce: config.cache.chunks.test3_fsmdb0.fsmcoll0 ( d291b2bc-f179-4f06-8164-0b81d0131eb1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.355-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 3ae5e157-1023-40ef-ac20-d7eaae6c54da: config.cache.chunks.test3_fsmdb0.fsmcoll0 ( a33e44c0-60ea-478a-83bd-e45f3213aca7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:45.779-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.858-0500 I COMMAND [conn159] CMD: validate test2_fsmdb0.agg_out, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:31:45.779-0500 "host" : "localhost:20001",
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.626-0500 I NETWORK [conn137] received client metadata from 127.0.0.1:45470 conn137: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:46.994-0500 [jsTest] New session started with sessionID: { "id" : UUID("89a44a74-a124-47d1-b95d-ab5b29c0b781") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.559-0500 I INDEX [ReplWriterWorker-12] index build: starting on test3_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.994-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:41.969-0500 I SHARDING [conn23] distributed lock 'test2_fsmdb0.fsmcoll0' acquired for 'dropCollection', ts : 5ddd7d9d5cde74b6784bb755
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:46.994-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.879-0500 I INDEX [conn156] Validation complete for collection local.replset.election (UUID: 801ad0de-17c3-44b2-a878-e91b8de004c5). No corruption found.
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.995-0500 "buildIndexes" : true,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.508-0500 I STORAGE [ReplWriterWorker-10] createCollection: test3_fsmdb0.agg_out with provided UUID: 12bfc6f5-a5d9-4228-a70a-b419624ce864 and options: { uuid: UUID("12bfc6f5-a5d9-4228-a70a-b419624ce864") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:46.995-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.858-0500 W STORAGE [conn159] Could not complete validation of table:collection-126--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.995-0500 "hidden" : false,
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.628-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45472 #138 (8 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:46.995-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.559-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.995-0500 "priority" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 "_id" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 "host" : "localhost:20002",
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.996-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "_id" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "host" : "localhost:20003",
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 ],
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "settings" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "chainingAllowed" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "heartbeatIntervalMillis" : 2000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "heartbeatTimeoutSecs" : 10,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "electionTimeoutMillis" : 86400000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "catchUpTimeoutMillis" : -1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "catchUpTakeoverDelayMillis" : 30000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "getLastErrorModes" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "getLastErrorDefaults" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.997-0500 "w" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 "wtimeout" : 0
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 "replicaSetId" : ObjectId("5ddd7d683bbfe7fa5630d3b8")
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 [jsTest] New session started with sessionID: { "id" : UUID("5a805393-ffba-45a2-8f85-695397361f13") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 Recreating replica set from config {
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 "_id" : "shard-rs1",
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 "version" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 "protocolVersion" : NumberLong(1),
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 "writeConcernMajorityJournalDefault" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 "members" : [
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 "_id" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 "host" : "localhost:20004",
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.998-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "priority" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "_id" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "host" : "localhost:20005",
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "_id" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "host" : "localhost:20006",
[fsm_workload_test:agg_out] 2019-11-26T14:31:46.999-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 ],
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "settings" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "chainingAllowed" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "heartbeatIntervalMillis" : 2000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "heartbeatTimeoutSecs" : 10,
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "electionTimeoutMillis" : 86400000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "catchUpTimeoutMillis" : -1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "catchUpTakeoverDelayMillis" : 30000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "getLastErrorModes" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "getLastErrorDefaults" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "w" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "wtimeout" : 0
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.000-0500 "replicaSetId" : ObjectId("5ddd7d6bcf8184c2e1492eba")
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500 [jsTest] New session started with sessionID: { "id" : UUID("9e5e52f7-3ed7-4077-ad12-6db40acad3a8") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500 [jsTest] New session started with sessionID: { "id" : UUID("005ed255-285c-4e36-9af0-7a4ae5f1c9ed") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500 [jsTest] New session started with sessionID: { "id" : UUID("13ee0c6d-3a5e-4efb-8f7e-88419bb9f1a3") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.001-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500 [jsTest] New session started with sessionID: { "id" : UUID("3fe7dc8f-37b5-4f46-92d7-1a89ecb93ed2") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500 [jsTest] New session started with sessionID: { "id" : UUID("e55cc968-4011-4585-a3e5-4cec28271e09") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500 [jsTest] New session started with sessionID: { "id" : UUID("69d881e4-53fc-4d79-a003-b4176717d0d5") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500 [jsTest] Workload(s) started: jstests/concurrency/fsm_workloads/agg_out.js
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.002-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 [jsTest] New session started with sessionID: { "id" : UUID("53087a90-9976-412a-8155-7dc18f9c5dc2") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 Using 5 threads (requested 5)
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 Implicit session: session { "id" : UUID("a20e8022-d59a-46f2-94b4-45f18f496396") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 Implicit session: session { "id" : UUID("41cbfc84-e478-449c-83be-1dbf2a1f6423") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 Implicit session: session { "id" : UUID("2f0a3520-3072-49cd-b573-10b88d9f640d") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 Implicit session: session { "id" : UUID("02e2edb9-82c2-4297-8bc3-7509fbb27529") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 Implicit session: session { "id" : UUID("b4861328-6294-416b-9dac-c068fa7cf90b") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.003-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [tid:3] setting random seed: 4147381866
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [tid:0] setting random seed: 1087612350
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [tid:2] setting random seed: 806376151
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [tid:4] setting random seed: 1498086183
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [tid:1] setting random seed: 2253259475
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [tid:3]
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [jsTest] New session started with sessionID: { "id" : UUID("8f169c90-43a1-4c3d-84ba-96afe3bea6ba") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [tid:0]
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [jsTest] New session started with sessionID: { "id" : UUID("fe44fdfe-c512-4ee5-9746-0ef4f91d78d0") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [tid:2]
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [jsTest] New session started with sessionID: { "id" : UUID("49d93db0-9abd-4103-bc81-b25086705499") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.004-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.005-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.005-0500 [tid:1]
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.005-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.005-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.005-0500 [jsTest] New session started with sessionID: { "id" : UUID("c396db2f-788f-419f-b744-d7ae3889c6f5") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.005-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.005-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.005-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.005-0500 [tid:4]
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.005-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.005-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.005-0500 [jsTest] New session started with sessionID: { "id" : UUID("d53e7426-95aa-4942-aab6-beb057515432") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.005-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.005-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.005-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.005-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js finished.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:41.969-0500 I SHARDING [conn23] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:41.969-0500-5ddd7d9d5cde74b6784bb757", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55596", time: new Date(1574796701969), what: "dropCollection.start", ns: "test2_fsmdb0.fsmcoll0", details: {} }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.006-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash_background.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash_background"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash_background.js
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.879-0500 I COMMAND [conn156] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.523-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.858-0500 I INDEX [conn159] validating the internal structure of index _id_ on collection test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.628-0500 I NETWORK [conn138] received client metadata from 127.0.0.1:45472 conn138: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.559-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 1ec77192-2617-4561-9fc8-fb21618bdf75: test3_fsmdb0.agg_out (12bfc6f5-a5d9-4228-a70a-b419624ce864 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:41.984-0500 I SHARDING [conn23] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:41.984-0500-5ddd7d9d5cde74b6784bb760", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55596", time: new Date(1574796701984), what: "dropCollection", ns: "test2_fsmdb0.fsmcoll0", details: {} }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.880-0500 I INDEX [conn156] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.558-0500 I INDEX [ReplWriterWorker-8] index build: starting on test3_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.858-0500 W STORAGE [conn159] Could not complete validation of table:index-131--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.633-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45474 #139 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.559-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:41.986-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7d9d5cde74b6784bb755' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.882-0500 I INDEX [conn156] validating collection local.replset.minvalid (UUID: a96fd08c-e1c8-43e5-868a-0849697b175e)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.558-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.858-0500 I INDEX [conn159] validating the internal structure of index _id_hashed on collection test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.633-0500 I NETWORK [conn139] received client metadata from 127.0.0.1:45474 conn139: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.560-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:41.998-0500 I SHARDING [conn23] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:41.998-0500-5ddd7d9d5cde74b6784bb768", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55596", time: new Date(1574796701998), what: "dropDatabase", ns: "test2_fsmdb0", details: {} }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.882-0500 I INDEX [conn156] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.558-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: fca3692f-906f-47c2-90c9-aa5c07cc6a34: test3_fsmdb0.agg_out (12bfc6f5-a5d9-4228-a70a-b419624ce864 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.858-0500 W STORAGE [conn159] Could not complete validation of table:index-138--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.635-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45478 #140 (10 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.636-0500 I NETWORK [conn140] received client metadata from 127.0.0.1:45478 conn140: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:42.001-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7d9d5cde74b6784bb742' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.882-0500 I INDEX [conn156] Validation complete for collection local.replset.minvalid (UUID: a96fd08c-e1c8-43e5-868a-0849697b175e). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.558-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.858-0500 I INDEX [conn159] validating collection test2_fsmdb0.agg_out (UUID: 08932b51-9933-4490-ab6b-1df6cfb57633)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.562-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.636-0500 I NETWORK [conn135] end connection 127.0.0.1:45466 (9 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.127-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56768 #116 (33 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.883-0500 I COMMAND [conn156] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.559-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.013-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js started with pid 15952.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.860-0500 I INDEX [conn159] validating index consistency _id_ on collection test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.564-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 1ec77192-2617-4561-9fc8-fb21618bdf75: test3_fsmdb0.agg_out ( 12bfc6f5-a5d9-4228-a70a-b419624ce864 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.638-0500 I NETWORK [conn136] end connection 127.0.0.1:45468 (8 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.127-0500 I NETWORK [conn116] received client metadata from 127.0.0.1:56768 conn116: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.883-0500 I INDEX [conn156] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.561-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.860-0500 I INDEX [conn159] validating index consistency _id_hashed on collection test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.680-0500 I STORAGE [ReplWriterWorker-14] createCollection: test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 with provided UUID: 4fb1c91d-b1bb-4f34-b5a6-a959123153c4 and options: { uuid: UUID("4fb1c91d-b1bb-4f34-b5a6-a959123153c4"), temp: true }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.638-0500 I NETWORK [conn137] end connection 127.0.0.1:45470 (7 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.128-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56770 #117 (34 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.885-0500 I INDEX [conn156] validating collection local.replset.oplogTruncateAfterPoint (UUID: 4ac06258-0ea7-46c8-b773-0c637830872b)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.563-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: fca3692f-906f-47c2-90c9-aa5c07cc6a34: test3_fsmdb0.agg_out ( 12bfc6f5-a5d9-4228-a70a-b419624ce864 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.860-0500 I INDEX [conn159] Validation complete for collection test2_fsmdb0.agg_out (UUID: 08932b51-9933-4490-ab6b-1df6cfb57633). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.861-0500 I COMMAND [conn159] CMD: validate test2_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.640-0500 I NETWORK [conn138] end connection 127.0.0.1:45472 (6 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.128-0500 I NETWORK [conn117] received client metadata from 127.0.0.1:56770 conn117: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.885-0500 I INDEX [conn156] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.680-0500 I STORAGE [ReplWriterWorker-9] createCollection: test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 with provided UUID: 4fb1c91d-b1bb-4f34-b5a6-a959123153c4 and options: { uuid: UUID("4fb1c91d-b1bb-4f34-b5a6-a959123153c4"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.695-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.861-0500 W STORAGE [conn159] Could not complete validation of table:collection-112--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.802-0500 I COMMAND [conn134] command test3_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d53e7426-95aa-4942-aab6-beb057515432") }, $db: "test3_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 154ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.133-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56780 #118 (35 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.886-0500 I INDEX [conn156] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 4ac06258-0ea7-46c8-b773-0c637830872b). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.696-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.696-0500 I STORAGE [ReplWriterWorker-2] createCollection: test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 with provided UUID: 9994eb3f-8b56-4ba9-9d36-e7240503d188 and options: { uuid: UUID("9994eb3f-8b56-4ba9-9d36-e7240503d188"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.861-0500 I INDEX [conn159] validating the internal structure of index _id_ on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.831-0500 I COMMAND [conn139] command test3_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("fe44fdfe-c512-4ee5-9746-0ef4f91d78d0") }, $db: "test3_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 183ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.133-0500 I NETWORK [conn118] received client metadata from 127.0.0.1:56780 conn118: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.886-0500 I COMMAND [conn156] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.697-0500 I STORAGE [ReplWriterWorker-5] createCollection: test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 with provided UUID: 9994eb3f-8b56-4ba9-9d36-e7240503d188 and options: { uuid: UUID("9994eb3f-8b56-4ba9-9d36-e7240503d188"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.710-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.861-0500 W STORAGE [conn159] Could not complete validation of table:index-113--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:44.847-0500 I COMMAND [conn140] command test3_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("49d93db0-9abd-4103-bc81-b25086705499") }, $db: "test3_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 199ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.133-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56782 #119 (36 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.887-0500 I INDEX [conn156] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.725-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.711-0500 I STORAGE [ReplWriterWorker-1] createCollection: test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 with provided UUID: 86906110-e72c-4bfd-9dfb-e0faa8857257 and options: { uuid: UUID("86906110-e72c-4bfd-9dfb-e0faa8857257"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.861-0500 I INDEX [conn159] validating the internal structure of index _id_hashed on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.134-0500 I NETWORK [conn119] received client metadata from 127.0.0.1:56782 conn119: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.889-0500 I INDEX [conn156] validating collection local.startup_log (UUID: e8e71921-e80f-42ad-92d0-ad769374a694)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.725-0500 I STORAGE [ReplWriterWorker-14] createCollection: test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 with provided UUID: 86906110-e72c-4bfd-9dfb-e0faa8857257 and options: { uuid: UUID("86906110-e72c-4bfd-9dfb-e0faa8857257"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.725-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.861-0500 W STORAGE [conn159] Could not complete validation of table:index-114--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.151-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56810 #120 (37 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.889-0500 I INDEX [conn156] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.740-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.725-0500 I STORAGE [ReplWriterWorker-9] createCollection: test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 with provided UUID: 044c76b1-1b5b-4981-867a-b6690dd735b8 and options: { uuid: UUID("044c76b1-1b5b-4981-867a-b6690dd735b8"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.862-0500 I INDEX [conn159] validating collection test2_fsmdb0.fsmcoll0 (UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.151-0500 I NETWORK [conn120] received client metadata from 127.0.0.1:56810 conn120: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.889-0500 I INDEX [conn156] Validation complete for collection local.startup_log (UUID: e8e71921-e80f-42ad-92d0-ad769374a694). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.741-0500 I STORAGE [ReplWriterWorker-6] createCollection: test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 with provided UUID: 044c76b1-1b5b-4981-867a-b6690dd735b8 and options: { uuid: UUID("044c76b1-1b5b-4981-867a-b6690dd735b8"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.740-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.863-0500 I INDEX [conn159] validating index consistency _id_ on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.160-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56820 #121 (38 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.889-0500 I COMMAND [conn156] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.756-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.741-0500 I STORAGE [ReplWriterWorker-8] createCollection: test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 with provided UUID: 86ac9d9d-dbdd-45b4-b872-4d3229437831 and options: { uuid: UUID("86ac9d9d-dbdd-45b4-b872-4d3229437831"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.863-0500 I INDEX [conn159] validating index consistency _id_hashed on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.160-0500 I NETWORK [conn121] received client metadata from 127.0.0.1:56820 conn121: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.890-0500 I INDEX [conn156] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.757-0500 I STORAGE [ReplWriterWorker-10] createCollection: test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 with provided UUID: 86ac9d9d-dbdd-45b4-b872-4d3229437831 and options: { uuid: UUID("86ac9d9d-dbdd-45b4-b872-4d3229437831"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.758-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.863-0500 I INDEX [conn159] Validation complete for collection test2_fsmdb0.fsmcoll0 (UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.162-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56822 #122 (39 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.892-0500 I INDEX [conn156] validating collection local.system.replset (UUID: 318b7af2-23ac-427e-bba7-a3e3f5b1e60d)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.771-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.775-0500 I INDEX [ReplWriterWorker-7] index build: starting on test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.865-0500 I NETWORK [conn159] end connection 127.0.0.1:46948 (39 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.162-0500 I NETWORK [conn122] received client metadata from 127.0.0.1:56822 conn122: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.892-0500 I INDEX [conn156] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.790-0500 I INDEX [ReplWriterWorker-13] index build: starting on test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.775-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.923-0500 I NETWORK [conn158] end connection 127.0.0.1:46924 (38 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.183-0500 I SHARDING [conn17] distributed lock 'test3_fsmdb0' acquired for 'dropCollection', ts : 5ddd7da05cde74b6784bb781
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.892-0500 I INDEX [conn156] Validation complete for collection local.system.replset (UUID: 318b7af2-23ac-427e-bba7-a3e3f5b1e60d). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.790-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.775-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 4f53ed84-83a4-46d7-937c-6a9f561bc6a7: test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 (4fb1c91d-b1bb-4f34-b5a6-a959123153c4 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.933-0500 I NETWORK [conn157] end connection 127.0.0.1:46922 (37 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.185-0500 I SHARDING [conn17] distributed lock 'test3_fsmdb0.fsmcoll0' acquired for 'dropCollection', ts : 5ddd7da05cde74b6784bb783
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.893-0500 I COMMAND [conn156] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.790-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 3b53999a-4313-4093-969c-023d7f392890: test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 (4fb1c91d-b1bb-4f34-b5a6-a959123153c4 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.775-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.947-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46958 #160 (38 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.187-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7da05cde74b6784bb783' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.894-0500 I INDEX [conn156] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.791-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.776-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:39.947-0500 I NETWORK [conn160] received client metadata from 127.0.0.1:46958 conn160: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.187-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7da05cde74b6784bb781' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.896-0500 I INDEX [conn156] validating collection local.system.rollback.id (UUID: 2d9a033a-73d1-44ef-b7d1-30b6243b0419)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.791-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.778-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.955-0500 I COMMAND [conn55] CMD: drop test2_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.190-0500 I SHARDING [conn17] distributed lock 'test3_fsmdb0' acquired for 'enableSharding', ts : 5ddd7da05cde74b6784bb78b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.896-0500 I INDEX [conn156] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.794-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.789-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 4f53ed84-83a4-46d7-937c-6a9f561bc6a7: test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 ( 4fb1c91d-b1bb-4f34-b5a6-a959123153c4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.956-0500 I STORAGE [conn55] dropCollection: test2_fsmdb0.agg_out (08932b51-9933-4490-ab6b-1df6cfb57633) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.192-0500 I SHARDING [conn17] Registering new database { _id: "test3_fsmdb0", primary: "shard-rs1", partitioned: false, version: { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } } in sharding catalog
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.896-0500 I INDEX [conn156] Validation complete for collection local.system.rollback.id (UUID: 2d9a033a-73d1-44ef-b7d1-30b6243b0419). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.801-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 3b53999a-4313-4093-969c-023d7f392890: test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 ( 4fb1c91d-b1bb-4f34-b5a6-a959123153c4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.796-0500 I INDEX [ReplWriterWorker-1] index build: starting on test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.956-0500 I STORAGE [conn55] Finishing collection drop for test2_fsmdb0.agg_out (08932b51-9933-4490-ab6b-1df6cfb57633).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.195-0500 I SHARDING [conn17] Enabling sharding for database [test3_fsmdb0] in config db
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.897-0500 I COMMAND [conn156] CMD: validate test2_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.808-0500 I INDEX [ReplWriterWorker-14] index build: starting on test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.796-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.956-0500 I STORAGE [conn55] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.agg_out (08932b51-9933-4490-ab6b-1df6cfb57633)'. Ident: 'index-131--2588534479858262356', commit timestamp: 'Timestamp(1574796701, 5)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.196-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7da05cde74b6784bb78b' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.898-0500 W STORAGE [conn156] Could not complete validation of table:collection-328-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.808-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.796-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 5cccd887-575a-4d1c-84b0-207a47bc28ed: test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 (86906110-e72c-4bfd-9dfb-e0faa8857257 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.956-0500 I STORAGE [conn55] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.agg_out (08932b51-9933-4490-ab6b-1df6cfb57633)'. Ident: 'index-138--2588534479858262356', commit timestamp: 'Timestamp(1574796701, 5)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.956-0500 I STORAGE [conn55] Deferring table drop for collection 'test2_fsmdb0.agg_out'. Ident: collection-126--2588534479858262356, commit timestamp: Timestamp(1574796701, 5)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.898-0500 I INDEX [conn156] validating the internal structure of index _id_ on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.809-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 9ace3743-d38c-4823-92d3-46dfa0149d08: test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 (86906110-e72c-4bfd-9dfb-e0faa8857257 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.796-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.797-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.799-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.809-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.965-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.agg_out took 0 ms and found the collection is not sharded
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.898-0500 W STORAGE [conn156] Could not complete validation of table:index-329-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.810-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 5cccd887-575a-4d1c-84b0-207a47bc28ed: test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 ( 86906110-e72c-4bfd-9dfb-e0faa8857257 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.198-0500 I SHARDING [conn17] distributed lock 'test3_fsmdb0' acquired for 'shardCollection', ts : 5ddd7da05cde74b6784bb794
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.809-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.966-0500 I SHARDING [conn55] Updating metadata for collection test2_fsmdb0.agg_out from collection version: 1|0||5ddd7d96cf8184c2e1493a53, shard version: 1|0||5ddd7d96cf8184c2e1493a53 to collection version: due to UUID change
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.898-0500 I INDEX [conn156] validating the internal structure of index _id_hashed on collection test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.818-0500 I INDEX [ReplWriterWorker-8] index build: starting on test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.199-0500 I SHARDING [conn17] distributed lock 'test3_fsmdb0.fsmcoll0' acquired for 'shardCollection', ts : 5ddd7da05cde74b6784bb796
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.812-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.035-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.966-0500 I COMMAND [ShardServerCatalogCacheLoader-1] CMD: drop config.cache.chunks.test2_fsmdb0.agg_out
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.377-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.832-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.832-0500 [jsTest] New session started with sessionID: { "id" : UUID("20f316ba-ab71-41b5-aa6b-cd043507d1d8") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.832-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.832-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.832-0500 [jsTest] Workload(s) completed in 3023 ms: jstests/concurrency/fsm_workloads/agg_out.js
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.833-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.833-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.833-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.833-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.898-0500 W STORAGE [conn156] Could not complete validation of table:index-330-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.833-0500 Implicit session: session { "id" : UUID("56495dd4-458b-4a00-8fe5-23d378b32f8d") }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:47.020-0500 I COMMAND [conn47] command test3_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("8f169c90-43a1-4c3d-84ba-96afe3bea6ba") }, $db: "test3_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2373ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:47.833-0500 FSM workload jstests/concurrency/fsm_workloads/agg_out.js finished.
[executor:fsm_workload_test:job0] 2019-11-26T14:31:47.834-0500 agg_out.js ran in 3.81 seconds: no failures detected.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.833-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.834-0500 true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.834-0500 2019-11-26T14:31:47.094-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.834-0500 2019-11-26T14:31:47.094-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.834-0500 2019-11-26T14:31:47.094-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.834-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.834-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.834-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.834-0500 [jsTest] New session started with sessionID: { "id" : UUID("5b1135b4-62e9-49ed-ae48-aecd7c9a758c") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.834-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.834-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.834-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.834-0500 2019-11-26T14:31:47.098-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.834-0500 2019-11-26T14:31:47.098-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.834-0500 2019-11-26T14:31:47.098-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.835-0500 2019-11-26T14:31:47.098-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.835-0500 2019-11-26T14:31:47.099-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.835-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.835-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.835-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.835-0500 [jsTest] New session started with sessionID: { "id" : UUID("c7c999e0-f82a-4724-bc28-69247779d587") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.818-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.835-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.257-0500 I NETWORK [conn117] end connection 127.0.0.1:56770 (38 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.835-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.075-0500 I COMMAND [conn134] command test3_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d53e7426-95aa-4942-aab6-beb057515432") }, $clusterTime: { clusterTime: Timestamp(1574796704, 1078), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test3_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2268ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.835-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.836-0500 2019-11-26T14:31:47.100-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.836-0500 2019-11-26T14:31:47.100-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:47.098-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52666 #81 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.836-0500 2019-11-26T14:31:47.100-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:47.098-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53556 #75 (14 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.836-0500 2019-11-26T14:31:47.100-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.822-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 9ace3743-d38c-4823-92d3-46dfa0149d08: test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 ( 86906110-e72c-4bfd-9dfb-e0faa8857257 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.836-0500 2019-11-26T14:31:47.101-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.836-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.836-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.836-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.836-0500 [jsTest] New session started with sessionID: { "id" : UUID("70aff224-ca25-4c3c-b8d0-c20c2c233484") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.966-0500 I STORAGE [ShardServerCatalogCacheLoader-1] dropCollection: config.cache.chunks.test2_fsmdb0.agg_out (13bc0717-3ecb-47d5-aedd-db010ec932d6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.836-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.898-0500 I INDEX [conn156] validating collection test2_fsmdb0.fsmcoll0 (UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.837-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:47.207-0500 I COMMAND [conn46] command test3_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("c396db2f-788f-419f-b744-d7ae3889c6f5") }, $clusterTime: { clusterTime: Timestamp(1574796704, 3085), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test3_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f\", to: \"test3_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test3_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:713 protocol:op_msg 2355ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.837-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.837-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.837-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.837-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.837-0500 Implicit session: session { "id" : UUID("67b5aa8e-eab4-4ded-ab7b-a70096bdda52") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.837-0500 Implicit session: session { "id" : UUID("f41d5837-7256-4206-a1ad-bcd93bf5168d") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.837-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.837-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.837-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.837-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.837-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.837-0500 [jsTest] New session started with sessionID: { "id" : UUID("e7e129ca-ad4d-4fbb-8835-9f01231239cb") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.837-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.837-0500
[CheckReplDBHashInBackground:job0] Pausing the background check repl dbhash thread.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.838-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.818-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 40269ef9-0b22-4260-83fe-76624d9c81cc: test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 (9994eb3f-8b56-4ba9-9d36-e7240503d188 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.838-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.264-0500 I NETWORK [conn116] end connection 127.0.0.1:56768 (37 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.838-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.085-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45482 #141 (7 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.838-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.838-0500 [jsTest] New session started with sessionID: { "id" : UUID("e92f4f21-7664-4258-aad0-b1a22a452329") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.838-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.838-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:47.098-0500 I NETWORK [conn81] received client metadata from 127.0.0.1:52666 conn81: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.838-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:47.099-0500 I NETWORK [conn75] received client metadata from 127.0.0.1:53556 conn75: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.839-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.831-0500 I INDEX [ReplWriterWorker-10] index build: starting on test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.839-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.966-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Finishing collection drop for config.cache.chunks.test2_fsmdb0.agg_out (13bc0717-3ecb-47d5-aedd-db010ec932d6).
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.839-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.839-0500 [jsTest] New session started with sessionID: { "id" : UUID("07294df8-d390-49e5-8e4b-68495d70d967") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.899-0500 I INDEX [conn156] validating index consistency _id_ on collection test2_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.839-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:47.210-0500 I COMMAND [conn47] command test3_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("8f169c90-43a1-4c3d-84ba-96afe3bea6ba") }, $clusterTime: { clusterTime: Timestamp(1574796705, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test3_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc\", to: \"test3_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test3_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:710 protocol:op_msg 187ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.839-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.818-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.839-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.293-0500 D4 TXN [conn52] New transaction started with txnNumber: 0 on session with lsid 249117dc-3089-4cf1-a36d-07feb7796d7b
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.839-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.085-0500 I NETWORK [conn141] received client metadata from 127.0.0.1:45482 conn141: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.840-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:47.198-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52692 #82 (14 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.840-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:47.198-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53584 #76 (15 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.840-0500 [jsTest] New session started with sessionID: { "id" : UUID("93ac67b9-6391-49e9-be17-d0bbb9c04e3f") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.831-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.840-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.966-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test2_fsmdb0.agg_out (13bc0717-3ecb-47d5-aedd-db010ec932d6)'. Ident: 'index-147--2588534479858262356', commit timestamp: 'Timestamp(1574796701, 9)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.840-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.840-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.900-0500 I INDEX [conn156] validating index consistency _id_hashed on collection test2_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.840-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:47.213-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.agg_out to version 1|0||5ddd7da3cf8184c2e1493fd4 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.841-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.819-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.841-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.341-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 from version {} to version { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.841-0500 [jsTest] New session started with sessionID: { "id" : UUID("18ef19ae-2d21-402b-a24c-2d141361096c") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.142-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 from version {} to version { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.841-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:47.198-0500 I NETWORK [conn82] received client metadata from 127.0.0.1:52692 conn82: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.841-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:47.198-0500 I NETWORK [conn76] received client metadata from 127.0.0.1:53584 conn76: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.841-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.831-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 77aa592b-9868-4767-a173-cad42312a61c: test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 (9994eb3f-8b56-4ba9-9d36-e7240503d188 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.841-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.966-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test2_fsmdb0.agg_out (13bc0717-3ecb-47d5-aedd-db010ec932d6)'. Ident: 'index-150--2588534479858262356', commit timestamp: 'Timestamp(1574796701, 9)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.842-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.842-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.900-0500 I INDEX [conn156] Validation complete for collection test2_fsmdb0.fsmcoll0 (UUID: 11da2d1e-3dd5-4812-9686-c490a6bdfff0). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.842-0500 [jsTest] New session started with sessionID: { "id" : UUID("4e7ba3d8-44e2-4d06-b4cf-a3f46b3d369c") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:47.294-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 from version {} to version { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.842-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.822-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.842-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.341-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.fsmcoll0 to version 1|3||5ddd7da0cf8184c2e1493df9 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.842-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.143-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.agg_out to version 1|0||5ddd7da3cf8184c2e1493fd4 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.842-0500 Running data consistency checks for replica set: shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:47.208-0500 W CONTROL [conn82] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 718 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.842-0500 Running data consistency checks for replica set: shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:47.209-0500 W CONTROL [conn76] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 323 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.843-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.831-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.843-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.966-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for collection 'config.cache.chunks.test2_fsmdb0.agg_out'. Ident: collection-145--2588534479858262356, commit timestamp: Timestamp(1574796701, 9)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.843-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.901-0500 I NETWORK [conn156] end connection 127.0.0.1:39484 (39 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.843-0500 [jsTest] New session started with sessionID: { "id" : UUID("7d847079-c37e-45f6-9d0b-24a6de4f4dae") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:47.296-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.agg_out to version 1|0||5ddd7da3cf8184c2e1493fd4 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.843-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.824-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 40269ef9-0b22-4260-83fe-76624d9c81cc: test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 ( 9994eb3f-8b56-4ba9-9d36-e7240503d188 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.843-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.343-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7da05cde74b6784bb796 unlocked.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.843-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.169-0500 I COMMAND [conn139] command test3_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("fe44fdfe-c512-4ee5-9746-0ef4f91d78d0") }, $clusterTime: { clusterTime: Timestamp(1574796704, 2966), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test3_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f\", to: \"test3_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:575 protocol:op_msg 2336ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.844-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:47.225-0500 W CONTROL [conn82] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 718 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.844-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:47.225-0500 W CONTROL [conn76] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 323 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.844-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.832-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.844-0500 [jsTest] New session started with sessionID: { "id" : UUID("be38c4ff-2688-402a-9c6f-8f964f98f04e") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.975-0500 I COMMAND [conn55] CMD: drop test2_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.844-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.844-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.923-0500 I NETWORK [conn155] end connection 127.0.0.1:39450 (38 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.844-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:47.330-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 from version {} to version { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.845-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.857-0500 I INDEX [ReplWriterWorker-15] index build: starting on test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.845-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.345-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7da05cde74b6784bb794 unlocked.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.845-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.185-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 from version {} to version { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.845-0500 [jsTest] New session started with sessionID: { "id" : UUID("ed5265f6-a476-40bc-a135-9b0874b82817") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:47.227-0500 I NETWORK [conn82] end connection 127.0.0.1:52692 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.845-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:47.227-0500 I NETWORK [conn76] end connection 127.0.0.1:53584 (14 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.845-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.835-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.845-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.975-0500 I STORAGE [conn55] dropCollection: test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.846-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:39.933-0500 I NETWORK [conn154] end connection 127.0.0.1:39448 (37 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.846-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:47.331-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.agg_out to version 1|0||5ddd7da3cf8184c2e1493fd4 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.846-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.857-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.846-0500 [jsTest] New session started with sessionID: { "id" : UUID("f3d98ce6-0e92-4b08-bf5d-efb32d032345") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.345-0500 I COMMAND [conn17] command admin.$cmd appName: "MongoDB Shell" command: _configsvrShardCollection { _configsvrShardCollection: "test3_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("53087a90-9976-412a-8155-7dc18f9c5dc2"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1574796704, 9), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45414", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 9), t: 1 } }, $db: "admin" } numYields:0 reslen:587 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 6 } }, Global: { acquireCount: { r: 2, w: 4 } }, Database: { acquireCount: { r: 2, w: 4 } }, Collection: { acquireCount: { r: 2, w: 4 } }, Mutex: { acquireCount: { r: 10, W: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 147ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.846-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.186-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.agg_out to version 1|0||5ddd7da3cf8184c2e1493fd4 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.846-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:47.367-0500 I NETWORK [conn81] end connection 127.0.0.1:52666 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.846-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:47.367-0500 I NETWORK [conn75] end connection 127.0.0.1:53556 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.847-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.837-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 77aa592b-9868-4767-a173-cad42312a61c: test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 ( 9994eb3f-8b56-4ba9-9d36-e7240503d188 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.847-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.975-0500 I STORAGE [conn55] Finishing collection drop for test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0).
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.847-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.955-0500 I COMMAND [conn37] CMD: drop test2_fsmdb0.agg_out
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.847-0500 [jsTest] New session started with sessionID: { "id" : UUID("70d9a5c9-4ab3-4144-bdbc-a3c60fb1176d") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:47.346-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 from version {} to version { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.847-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.857-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 06814303-ee3f-44f4-9533-10b3bb4d4368: test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 (044c76b1-1b5b-4981-867a-b6690dd735b8 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.847-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.347-0500 I SHARDING [conn17] distributed lock 'test3_fsmdb0' acquired for 'enableSharding', ts : 5ddd7da05cde74b6784bb7b5
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.847-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.187-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45505 #142 (8 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.848-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:47.385-0500 I NETWORK [conn79] end connection 127.0.0.1:52602 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.848-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:47.385-0500 I NETWORK [conn73] end connection 127.0.0.1:53492 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.848-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.852-0500 I INDEX [ReplWriterWorker-14] index build: starting on test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.848-0500 [jsTest] New session started with sessionID: { "id" : UUID("62f11293-c5fd-486f-8c78-e39c230d0361") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.975-0500 I STORAGE [conn55] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0)'. Ident: 'index-113--2588534479858262356', commit timestamp: 'Timestamp(1574796701, 14)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.848-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.848-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.971-0500 I COMMAND [conn37] CMD: drop test2_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.848-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:47.347-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.agg_out to version 1|0||5ddd7da3cf8184c2e1493fd4 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:47.849-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js finished.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.857-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[executor:fsm_workload_test:job0] 2019-11-26T14:31:47.849-0500 agg_out:CheckReplDBHashInBackground ran in 3.82 seconds: no failures detected.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.348-0500 I SHARDING [conn17] Enabling sharding for database [test3_fsmdb0] in config db
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.187-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45504 #143 (9 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:47.397-0500 I NETWORK [conn78] end connection 127.0.0.1:52566 (10 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:47.397-0500 I NETWORK [conn72] end connection 127.0.0.1:53456 (11 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.853-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.975-0500 I STORAGE [conn55] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0)'. Ident: 'index-114--2588534479858262356', commit timestamp: 'Timestamp(1574796701, 14)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.971-0500 I STORAGE [conn37] dropCollection: test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:47.352-0500 I NETWORK [conn46] end connection 127.0.0.1:58618 (2 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.858-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[executor:fsm_workload_test:job0] 2019-11-26T14:31:47.850-0500 Running agg_out:CheckReplDBHash...
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.349-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7da05cde74b6784bb7b5' unlocked.
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:47.851-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash.js
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.187-0500 I NETWORK [conn142] received client metadata from 127.0.0.1:45505 conn142: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.853-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: b9f980d3-4ed0-49d1-8c8b-a80fd2472cbb: test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 (044c76b1-1b5b-4981-867a-b6690dd735b8 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.975-0500 I STORAGE [conn55] Deferring table drop for collection 'test2_fsmdb0.fsmcoll0'. Ident: collection-112--2588534479858262356, commit timestamp: Timestamp(1574796701, 14)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.971-0500 I STORAGE [conn37] Finishing collection drop for test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0).
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:47.375-0500 I NETWORK [conn47] end connection 127.0.0.1:58622 (1 connection now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.860-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.352-0500 I SHARDING [conn17] distributed lock 'test3_fsmdb0' acquired for 'shardCollection', ts : 5ddd7da05cde74b6784bb7bb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.187-0500 I NETWORK [conn143] received client metadata from 127.0.0.1:45504 conn143: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.853-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.983-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.fsmcoll0 took 0 ms and found the collection is not sharded
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.971-0500 I STORAGE [conn37] Deferring table drop for index '_id_' on collection 'test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0)'. Ident: 'index-329-8224331490264904478', commit timestamp: 'Timestamp(1574796701, 13)'
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:47.385-0500 I NETWORK [conn45] end connection 127.0.0.1:58558 (0 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.862-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 (4fb1c91d-b1bb-4f34-b5a6-a959123153c4) to test3_fsmdb0.agg_out and drop 12bfc6f5-a5d9-4228-a70a-b419624ce864.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.353-0500 I SHARDING [conn17] distributed lock 'test3_fsmdb0.fsmcoll0' acquired for 'shardCollection', ts : 5ddd7da05cde74b6784bb7bd
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.207-0500 I COMMAND [conn140] command test3_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("49d93db0-9abd-4103-bc81-b25086705499") }, $clusterTime: { clusterTime: Timestamp(1574796704, 3032), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test3_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d\", to: \"test3_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test3_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:713 protocol:op_msg 2357ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.853-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.983-0500 I SHARDING [conn55] Updating metadata for collection test2_fsmdb0.fsmcoll0 from collection version: 1|3||5ddd7d96cf8184c2e1493933, shard version: 1|3||5ddd7d96cf8184c2e1493933 to collection version: due to UUID change
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.972-0500 I STORAGE [conn37] Deferring table drop for index '_id_hashed' on collection 'test2_fsmdb0.fsmcoll0 (11da2d1e-3dd5-4812-9686-c490a6bdfff0)'. Ident: 'index-330-8224331490264904478', commit timestamp: 'Timestamp(1574796701, 13)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.862-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test3_fsmdb0.agg_out (12bfc6f5-a5d9-4228-a70a-b419624ce864) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796704, 1078), t: 1 } and commit timestamp Timestamp(1574796704, 1078)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.355-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 from version {} to version { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.226-0500 I NETWORK [conn143] end connection 127.0.0.1:45504 (8 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.283-0500 I NETWORK [conn134] end connection 127.0.0.1:45464 (7 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.983-0500 I COMMAND [ShardServerCatalogCacheLoader-1] CMD: drop config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.972-0500 I STORAGE [conn37] Deferring table drop for collection 'test2_fsmdb0.fsmcoll0'. Ident: collection-328-8224331490264904478, commit timestamp: Timestamp(1574796701, 13)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.862-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test3_fsmdb0.agg_out (12bfc6f5-a5d9-4228-a70a-b419624ce864).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.355-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.fsmcoll0 to version 1|3||5ddd7da0cf8184c2e1493df9 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.856-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.284-0500 I NETWORK [conn139] end connection 127.0.0.1:45474 (6 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.984-0500 I STORAGE [ShardServerCatalogCacheLoader-1] dropCollection: config.cache.chunks.test2_fsmdb0.fsmcoll0 (e923876b-cb14-4999-bce6-e0591b1153b2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.982-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test2_fsmdb0.fsmcoll0 took 0 ms and found the collection is not sharded
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.862-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 4fb1c91d-b1bb-4f34-b5a6-a959123153c4 from test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.357-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7da05cde74b6784bb7bd' unlocked.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.358-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7da05cde74b6784bb7bb' unlocked.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.307-0500 I NETWORK [conn140] end connection 127.0.0.1:45478 (5 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:47.858-0500 JSTest jstests/hooks/run_check_repl_dbhash.js started with pid 15984.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.984-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Finishing collection drop for config.cache.chunks.test2_fsmdb0.fsmcoll0 (e923876b-cb14-4999-bce6-e0591b1153b2).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.982-0500 I SHARDING [conn37] Updating metadata for collection test2_fsmdb0.fsmcoll0 from collection version: 1|3||5ddd7d96cf8184c2e1493933, shard version: 1|1||5ddd7d96cf8184c2e1493933 to collection version: due to UUID change
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.862-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (12bfc6f5-a5d9-4228-a70a-b419624ce864)'. Ident: 'index-174--7234316082034423155', commit timestamp: 'Timestamp(1574796704, 1078)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.857-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 (4fb1c91d-b1bb-4f34-b5a6-a959123153c4) to test3_fsmdb0.agg_out and drop 12bfc6f5-a5d9-4228-a70a-b419624ce864.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.494-0500 I SHARDING [conn52] distributed lock 'test3_fsmdb0' acquired for 'createCollection', ts : 5ddd7da05cde74b6784bb7cc
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.496-0500 I SHARDING [conn52] distributed lock 'test3_fsmdb0.agg_out' acquired for 'createCollection', ts : 5ddd7da05cde74b6784bb7ce
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.984-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0 (e923876b-cb14-4999-bce6-e0591b1153b2)'. Ident: 'index-117--2588534479858262356', commit timestamp: 'Timestamp(1574796701, 23)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.982-0500 I COMMAND [ShardServerCatalogCacheLoader-0] CMD: drop config.cache.chunks.test2_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.862-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (12bfc6f5-a5d9-4228-a70a-b419624ce864)'. Ident: 'index-175--7234316082034423155', commit timestamp: 'Timestamp(1574796704, 1078)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.857-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test3_fsmdb0.agg_out (12bfc6f5-a5d9-4228-a70a-b419624ce864) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796704, 1078), t: 1 } and commit timestamp Timestamp(1574796704, 1078)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.357-0500 I NETWORK [conn142] end connection 127.0.0.1:45505 (4 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.525-0500 I SHARDING [conn52] distributed lock with ts: 5ddd7da05cde74b6784bb7ce' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.984-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0 (e923876b-cb14-4999-bce6-e0591b1153b2)'. Ident: 'index-118--2588534479858262356', commit timestamp: 'Timestamp(1574796701, 23)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.982-0500 I STORAGE [ShardServerCatalogCacheLoader-0] dropCollection: config.cache.chunks.test2_fsmdb0.fsmcoll0 (c904d8e5-593f-4133-b81d-a4e28a1049f0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.862-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-173--7234316082034423155, commit timestamp: Timestamp(1574796704, 1078)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.857-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test3_fsmdb0.agg_out (12bfc6f5-a5d9-4228-a70a-b419624ce864).
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.360-0500 I NETWORK [conn141] end connection 127.0.0.1:45482 (3 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:44.526-0500 I SHARDING [conn52] distributed lock with ts: 5ddd7da05cde74b6784bb7cc' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.984-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0'. Ident: collection-116--2588534479858262356, commit timestamp: Timestamp(1574796701, 23)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.986-0500 I COMMAND [conn55] dropDatabase test2_fsmdb0 - starting
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.986-0500 I COMMAND [conn55] dropDatabase test2_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.986-0500 I COMMAND [conn55] dropDatabase test2_fsmdb0 - finished
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.383-0500 I NETWORK [conn127] end connection 127.0.0.1:45362 (2 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.079-0500 I SHARDING [conn17] distributed lock 'test3_fsmdb0' acquired for 'enableSharding', ts : 5ddd7da35cde74b6784bb7e6
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.982-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Finishing collection drop for config.cache.chunks.test2_fsmdb0.fsmcoll0 (c904d8e5-593f-4133-b81d-a4e28a1049f0).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.864-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 06814303-ee3f-44f4-9533-10b3bb4d4368: test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 ( 044c76b1-1b5b-4981-867a-b6690dd735b8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.857-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 4fb1c91d-b1bb-4f34-b5a6-a959123153c4 from test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.998-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 took 0 ms and failed :: caused by :: NamespaceNotFound: database test2_fsmdb0 not found
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.384-0500 I NETWORK [conn129] end connection 127.0.0.1:45406 (1 connection now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.079-0500 I SHARDING [conn17] Enabling sharding for database [test3_fsmdb0] in config db
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.982-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0 (c904d8e5-593f-4133-b81d-a4e28a1049f0)'. Ident: 'index-332-8224331490264904478', commit timestamp: 'Timestamp(1574796701, 21)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.878-0500 I INDEX [ReplWriterWorker-4] index build: starting on test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.857-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (12bfc6f5-a5d9-4228-a70a-b419624ce864)'. Ident: 'index-174--2310912778499990807', commit timestamp: 'Timestamp(1574796704, 1078)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:41.998-0500 I SHARDING [conn55] setting this node's cached database version for test2_fsmdb0 to {}
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.386-0500 I NETWORK [conn130] end connection 127.0.0.1:45414 (0 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.081-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7da35cde74b6784bb7e6' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.982-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0 (c904d8e5-593f-4133-b81d-a4e28a1049f0)'. Ident: 'index-333-8224331490264904478', commit timestamp: 'Timestamp(1574796701, 21)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.878-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.878-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 0a5fb425-c29e-4ebd-9fed-0fabf8159b64: test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 (86ac9d9d-dbdd-45b4-b872-4d3229437831 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.609-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796696, 29)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.083-0500 I SHARDING [conn17] distributed lock 'test3_fsmdb0' acquired for 'shardCollection', ts : 5ddd7da35cde74b6784bb7ec
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.982-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for collection 'config.cache.chunks.test2_fsmdb0.fsmcoll0'. Ident: collection-331-8224331490264904478, commit timestamp: Timestamp(1574796701, 21)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.857-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (12bfc6f5-a5d9-4228-a70a-b419624ce864)'. Ident: 'index-175--2310912778499990807', commit timestamp: 'Timestamp(1574796704, 1078)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.878-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.609-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-109--2588534479858262356 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 11)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.085-0500 I SHARDING [conn17] distributed lock 'test3_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7da35cde74b6784bb7ee
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.991-0500 I COMMAND [conn37] dropDatabase test2_fsmdb0 - starting
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.857-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-173--2310912778499990807, commit timestamp: Timestamp(1574796704, 1078)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.879-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.610-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-110--2588534479858262356 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 11)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.093-0500 D4 TXN [conn52] New transaction started with txnNumber: 0 on session with lsid 534f8f8c-74ab-472e-ab2c-c35a9ec81a2d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.992-0500 I COMMAND [conn37] dropDatabase test2_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.859-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: b9f980d3-4ed0-49d1-8c8b-a80fd2472cbb: test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 ( 044c76b1-1b5b-4981-867a-b6690dd735b8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.879-0500 I STORAGE [ReplWriterWorker-9] createCollection: test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed with provided UUID: d108d732-d756-4f25-8812-a6483de9ea4c and options: { uuid: UUID("d108d732-d756-4f25-8812-a6483de9ea4c"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.611-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-108--2588534479858262356 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 11)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.094-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56886 #123 (38 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.992-0500 I COMMAND [conn37] dropDatabase test2_fsmdb0 - finished
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.872-0500 I INDEX [ReplWriterWorker-2] index build: starting on test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.881-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.612-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-102--2588534479858262356 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 16)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.094-0500 I NETWORK [conn123] received client metadata from 127.0.0.1:56886 conn123: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.997-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test2_fsmdb0 took 0 ms and failed :: caused by :: NamespaceNotFound: database test2_fsmdb0 not found
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.873-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.890-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 0a5fb425-c29e-4ebd-9fed-0fabf8159b64: test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 ( 86ac9d9d-dbdd-45b4-b872-4d3229437831 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.614-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-103--2588534479858262356 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 16)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.095-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56888 #124 (39 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:41.997-0500 I SHARDING [conn37] setting this node's cached database version for test2_fsmdb0 to {}
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.873-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 49e956f9-1df8-4df3-a8cd-459fc7656a60: test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 (86ac9d9d-dbdd-45b4-b872-4d3229437831 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.896-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.615-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-101--2588534479858262356 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 16)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.095-0500 I NETWORK [conn124] received client metadata from 127.0.0.1:56888 conn124: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:42.933-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796696, 28)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.873-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.905-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 (86906110-e72c-4bfd-9dfb-e0faa8857257) to test3_fsmdb0.agg_out and drop 4fb1c91d-b1bb-4f34-b5a6-a959123153c4.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.616-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-105--2588534479858262356 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 25)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.137-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 from version {} to version { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:42.933-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-319-8224331490264904478 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 9)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.873-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.905-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test3_fsmdb0.agg_out (4fb1c91d-b1bb-4f34-b5a6-a959123153c4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796704, 2798), t: 1 } and commit timestamp Timestamp(1574796704, 2798)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.616-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-106--2588534479858262356 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 25)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.137-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.agg_out to version 1|0||5ddd7da3cf8184c2e1493fd4 took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:42.934-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-322-8224331490264904478 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 9)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.874-0500 I STORAGE [ReplWriterWorker-15] createCollection: test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed with provided UUID: d108d732-d756-4f25-8812-a6483de9ea4c and options: { uuid: UUID("d108d732-d756-4f25-8812-a6483de9ea4c"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.905-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test3_fsmdb0.agg_out (4fb1c91d-b1bb-4f34-b5a6-a959123153c4).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.617-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-104--2588534479858262356 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 25)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.139-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7da35cde74b6784bb7ee' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:42.935-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-317-8224331490264904478 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 9)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.876-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.905-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 86906110-e72c-4bfd-9dfb-e0faa8857257 from test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.618-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-121--2588534479858262356 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1469)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.140-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7da35cde74b6784bb7ec' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:42.937-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-37-8224331490264904478 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 15)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.885-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 49e956f9-1df8-4df3-a8cd-459fc7656a60: test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 ( 86ac9d9d-dbdd-45b4-b872-4d3229437831 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.905-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (4fb1c91d-b1bb-4f34-b5a6-a959123153c4)'. Ident: 'index-178--7234316082034423155', commit timestamp: 'Timestamp(1574796704, 2798)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.619-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-122--2588534479858262356 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1469)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.182-0500 I SHARDING [conn17] distributed lock 'test3_fsmdb0' acquired for 'enableSharding', ts : 5ddd7da35cde74b6784bb812
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:42.938-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-38-8224331490264904478 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 15)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.893-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.905-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (4fb1c91d-b1bb-4f34-b5a6-a959123153c4)'. Ident: 'index-187--7234316082034423155', commit timestamp: 'Timestamp(1574796704, 2798)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.620-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-120--2588534479858262356 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1469)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.182-0500 I SHARDING [conn17] Enabling sharding for database [test3_fsmdb0] in config db
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:42.939-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-36-8224331490264904478 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 15)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.901-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 (86906110-e72c-4bfd-9dfb-e0faa8857257) to test3_fsmdb0.agg_out and drop 4fb1c91d-b1bb-4f34-b5a6-a959123153c4.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.905-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-177--7234316082034423155, commit timestamp: Timestamp(1574796704, 2798)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.621-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-129--2588534479858262356 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1590)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.184-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7da35cde74b6784bb812' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:42.940-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-41-8224331490264904478 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 23)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.901-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test3_fsmdb0.agg_out (4fb1c91d-b1bb-4f34-b5a6-a959123153c4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796704, 2798), t: 1 } and commit timestamp Timestamp(1574796704, 2798)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.908-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 (9994eb3f-8b56-4ba9-9d36-e7240503d188) to test3_fsmdb0.agg_out and drop 86906110-e72c-4bfd-9dfb-e0faa8857257.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.622-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-134--2588534479858262356 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1590)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.186-0500 I SHARDING [conn19] distributed lock 'test3_fsmdb0' acquired for 'shardCollection', ts : 5ddd7da35cde74b6784bb819
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:42.940-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-42-8224331490264904478 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 23)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.901-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test3_fsmdb0.agg_out (4fb1c91d-b1bb-4f34-b5a6-a959123153c4).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.909-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test3_fsmdb0.agg_out (86906110-e72c-4bfd-9dfb-e0faa8857257) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796704, 2968), t: 1 } and commit timestamp Timestamp(1574796704, 2968)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.623-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-124--2588534479858262356 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1590)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.188-0500 I SHARDING [conn19] distributed lock 'test3_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7da35cde74b6784bb81e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:42.941-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-40-8224331490264904478 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 23)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.901-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 86906110-e72c-4bfd-9dfb-e0faa8857257 from test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.909-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test3_fsmdb0.agg_out (86906110-e72c-4bfd-9dfb-e0faa8857257).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.624-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-130--2588534479858262356 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 2222)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.189-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 from version {} to version { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.901-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (4fb1c91d-b1bb-4f34-b5a6-a959123153c4)'. Ident: 'index-178--2310912778499990807', commit timestamp: 'Timestamp(1574796704, 2798)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.131-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39516 #157 (38 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.909-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 9994eb3f-8b56-4ba9-9d36-e7240503d188 from test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.625-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-136--2588534479858262356 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 2222)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.190-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.agg_out to version 1|0||5ddd7da3cf8184c2e1493fd4 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.901-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (4fb1c91d-b1bb-4f34-b5a6-a959123153c4)'. Ident: 'index-187--2310912778499990807', commit timestamp: 'Timestamp(1574796704, 2798)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.132-0500 I NETWORK [conn157] received client metadata from 127.0.0.1:39516 conn157: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.909-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (86906110-e72c-4bfd-9dfb-e0faa8857257)'. Ident: 'index-182--7234316082034423155', commit timestamp: 'Timestamp(1574796704, 2968)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.626-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-125--2588534479858262356 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 2222)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.191-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7da35cde74b6784bb81e' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.901-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-177--2310912778499990807, commit timestamp: Timestamp(1574796704, 2798)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.132-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39518 #158 (39 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.909-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (86906110-e72c-4bfd-9dfb-e0faa8857257)'. Ident: 'index-189--7234316082034423155', commit timestamp: 'Timestamp(1574796704, 2968)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.626-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-132--2588534479858262356 (ns: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9) with drop timestamp Timestamp(1574796694, 3097)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.192-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7da35cde74b6784bb819' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.905-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 (9994eb3f-8b56-4ba9-9d36-e7240503d188) to test3_fsmdb0.agg_out and drop 86906110-e72c-4bfd-9dfb-e0faa8857257.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.132-0500 I NETWORK [conn158] received client metadata from 127.0.0.1:39518 conn158: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.909-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-181--7234316082034423155, commit timestamp: Timestamp(1574796704, 2968)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.627-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-140--2588534479858262356 (ns: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9) with drop timestamp Timestamp(1574796694, 3097)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.291-0500 I SHARDING [conn23] distributed lock 'test3_fsmdb0' acquired for 'enableSharding', ts : 5ddd7da35cde74b6784bb835
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.905-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test3_fsmdb0.agg_out (86906110-e72c-4bfd-9dfb-e0faa8857257) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796704, 2968), t: 1 } and commit timestamp Timestamp(1574796704, 2968)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.137-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39536 #159 (40 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.911-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 (044c76b1-1b5b-4981-867a-b6690dd735b8) to test3_fsmdb0.agg_out and drop 9994eb3f-8b56-4ba9-9d36-e7240503d188.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.629-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-127--2588534479858262356 (ns: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9) with drop timestamp Timestamp(1574796694, 3097)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.291-0500 I SHARDING [conn23] Enabling sharding for database [test3_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.905-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test3_fsmdb0.agg_out (86906110-e72c-4bfd-9dfb-e0faa8857257).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.137-0500 I NETWORK [conn159] received client metadata from 127.0.0.1:39536 conn159: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.911-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test3_fsmdb0.agg_out (9994eb3f-8b56-4ba9-9d36-e7240503d188) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796704, 3033), t: 1 } and commit timestamp Timestamp(1574796704, 3033)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.630-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-133--2588534479858262356 (ns: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220) with drop timestamp Timestamp(1574796694, 3098)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.293-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7da35cde74b6784bb835' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.905-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 9994eb3f-8b56-4ba9-9d36-e7240503d188 from test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.137-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39538 #160 (41 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.911-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test3_fsmdb0.agg_out (9994eb3f-8b56-4ba9-9d36-e7240503d188).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.631-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-142--2588534479858262356 (ns: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220) with drop timestamp Timestamp(1574796694, 3098)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.296-0500 I SHARDING [conn22] distributed lock 'test3_fsmdb0' acquired for 'shardCollection', ts : 5ddd7da35cde74b6784bb83d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.905-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (86906110-e72c-4bfd-9dfb-e0faa8857257)'. Ident: 'index-182--2310912778499990807', commit timestamp: 'Timestamp(1574796704, 2968)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.137-0500 I NETWORK [conn160] received client metadata from 127.0.0.1:39538 conn160: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.911-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 044c76b1-1b5b-4981-867a-b6690dd735b8 from test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.632-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-128--2588534479858262356 (ns: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220) with drop timestamp Timestamp(1574796694, 3098)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.297-0500 I SHARDING [conn22] distributed lock 'test3_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7da35cde74b6784bb841
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.905-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (86906110-e72c-4bfd-9dfb-e0faa8857257)'. Ident: 'index-189--2310912778499990807', commit timestamp: 'Timestamp(1574796704, 2968)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.154-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39552 #161 (42 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.911-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (9994eb3f-8b56-4ba9-9d36-e7240503d188)'. Ident: 'index-180--7234316082034423155', commit timestamp: 'Timestamp(1574796704, 3033)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.633-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-146--2588534479858262356 (ns: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946) with drop timestamp Timestamp(1574796695, 5)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.299-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 from version {} to version { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.905-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-181--2310912778499990807, commit timestamp: Timestamp(1574796704, 2968)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.154-0500 I NETWORK [conn161] received client metadata from 127.0.0.1:39552 conn161: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.911-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (9994eb3f-8b56-4ba9-9d36-e7240503d188)'. Ident: 'index-191--7234316082034423155', commit timestamp: 'Timestamp(1574796704, 3033)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.633-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-148--2588534479858262356 (ns: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946) with drop timestamp Timestamp(1574796695, 5)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.299-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.agg_out to version 1|0||5ddd7da3cf8184c2e1493fd4 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.907-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 (044c76b1-1b5b-4981-867a-b6690dd735b8) to test3_fsmdb0.agg_out and drop 9994eb3f-8b56-4ba9-9d36-e7240503d188.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.163-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39564 #162 (43 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.911-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-179--7234316082034423155, commit timestamp: Timestamp(1574796704, 3033)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.634-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-144--2588534479858262356 (ns: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946) with drop timestamp Timestamp(1574796695, 5)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.300-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7da35cde74b6784bb841' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.907-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test3_fsmdb0.agg_out (9994eb3f-8b56-4ba9-9d36-e7240503d188) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796704, 3033), t: 1 } and commit timestamp Timestamp(1574796704, 3033)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.163-0500 I NETWORK [conn162] received client metadata from 127.0.0.1:39564 conn162: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.913-0500 I STORAGE [ReplWriterWorker-1] createCollection: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f with provided UUID: c61745e2-9e2c-43eb-bcff-b2a7c934a0dc and options: { uuid: UUID("c61745e2-9e2c-43eb-bcff-b2a7c934a0dc"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.635-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-153--2588534479858262356 (ns: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e) with drop timestamp Timestamp(1574796695, 509)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.302-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7da35cde74b6784bb83d' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.907-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test3_fsmdb0.agg_out (9994eb3f-8b56-4ba9-9d36-e7240503d188).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.165-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39566 #163 (44 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.925-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.637-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-154--2588534479858262356 (ns: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e) with drop timestamp Timestamp(1574796695, 509)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.327-0500 I SHARDING [conn22] distributed lock 'test3_fsmdb0' acquired for 'enableSharding', ts : 5ddd7da35cde74b6784bb84e
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.907-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 044c76b1-1b5b-4981-867a-b6690dd735b8 from test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.165-0500 I NETWORK [conn163] received client metadata from 127.0.0.1:39566 conn163: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.939-0500 I INDEX [ReplWriterWorker-9] index build: starting on test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:42.638-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-152--2588534479858262356 (ns: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e) with drop timestamp Timestamp(1574796695, 509)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.328-0500 I SHARDING [conn22] Enabling sharding for database [test3_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.907-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (9994eb3f-8b56-4ba9-9d36-e7240503d188)'. Ident: 'index-180--2310912778499990807', commit timestamp: 'Timestamp(1574796704, 3033)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.167-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39572 #164 (45 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.939-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.134-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46990 #161 (39 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.329-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7da35cde74b6784bb84e' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.907-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (9994eb3f-8b56-4ba9-9d36-e7240503d188)'. Ident: 'index-191--2310912778499990807', commit timestamp: 'Timestamp(1574796704, 3033)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.167-0500 I NETWORK [conn164] received client metadata from 127.0.0.1:39572 conn164: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.939-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: c818c775-180f-4945-a093-dcbe8b55acd5: test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed (d108d732-d756-4f25-8812-a6483de9ea4c ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.134-0500 I NETWORK [conn161] received client metadata from 127.0.0.1:46990 conn161: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.331-0500 I SHARDING [conn23] distributed lock 'test3_fsmdb0' acquired for 'shardCollection', ts : 5ddd7da35cde74b6784bb855
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.907-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-179--2310912778499990807, commit timestamp: Timestamp(1574796704, 3033)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.195-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39588 #165 (46 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.939-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.134-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46996 #162 (40 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.332-0500 I SHARDING [conn23] distributed lock 'test3_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7da35cde74b6784bb85a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.909-0500 I STORAGE [ReplWriterWorker-8] createCollection: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f with provided UUID: c61745e2-9e2c-43eb-bcff-b2a7c934a0dc and options: { uuid: UUID("c61745e2-9e2c-43eb-bcff-b2a7c934a0dc"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.196-0500 I NETWORK [conn165] received client metadata from 127.0.0.1:39588 conn165: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.940-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.134-0500 I NETWORK [conn162] received client metadata from 127.0.0.1:46996 conn162: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.334-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 from version {} to version { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.923-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.198-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39590 #166 (47 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.941-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 (86ac9d9d-dbdd-45b4-b872-4d3229437831) to test3_fsmdb0.agg_out and drop 044c76b1-1b5b-4981-867a-b6690dd735b8.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.139-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47010 #163 (41 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.334-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.agg_out to version 1|0||5ddd7da3cf8184c2e1493fd4 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.936-0500 I INDEX [ReplWriterWorker-2] index build: starting on test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.198-0500 I NETWORK [conn166] received client metadata from 127.0.0.1:39590 conn166: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.942-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.139-0500 I NETWORK [conn163] received client metadata from 127.0.0.1:47010 conn163: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.336-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7da35cde74b6784bb85a' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.936-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.208-0500 W CONTROL [conn166] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.943-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test3_fsmdb0.agg_out (044c76b1-1b5b-4981-867a-b6690dd735b8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796704, 3089), t: 1 } and commit timestamp Timestamp(1574796704, 3089)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.140-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47012 #164 (42 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.337-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7da35cde74b6784bb855' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.936-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 7c64ab6b-2f96-4bd2-8cbf-9cece153d8c1: test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed (d108d732-d756-4f25-8812-a6483de9ea4c ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.213-0500 W CONTROL [conn166] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.943-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test3_fsmdb0.agg_out (044c76b1-1b5b-4981-867a-b6690dd735b8).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.140-0500 I NETWORK [conn164] received client metadata from 127.0.0.1:47012 conn164: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.344-0500 I SHARDING [conn23] distributed lock 'test3_fsmdb0' acquired for 'enableSharding', ts : 5ddd7da35cde74b6784bb867
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.936-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.215-0500 I NETWORK [conn165] end connection 127.0.0.1:39588 (46 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.943-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 86ac9d9d-dbdd-45b4-b872-4d3229437831 from test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.155-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47020 #165 (43 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.344-0500 I SHARDING [conn23] Enabling sharding for database [test3_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.937-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.215-0500 I NETWORK [conn166] end connection 127.0.0.1:39590 (45 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.943-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (044c76b1-1b5b-4981-867a-b6690dd735b8)'. Ident: 'index-184--7234316082034423155', commit timestamp: 'Timestamp(1574796704, 3089)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.156-0500 I NETWORK [conn165] received client metadata from 127.0.0.1:47020 conn165: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.345-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7da35cde74b6784bb867' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.937-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 (86ac9d9d-dbdd-45b4-b872-4d3229437831) to test3_fsmdb0.agg_out and drop 044c76b1-1b5b-4981-867a-b6690dd735b8.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.250-0500 I STORAGE [conn48] createCollection: test3_fsmdb0.fsmcoll0 with provided UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635 and options: { uuid: UUID("81145456-1c0e-4ef0-89a6-ab06e3485635") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.943-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (044c76b1-1b5b-4981-867a-b6690dd735b8)'. Ident: 'index-193--7234316082034423155', commit timestamp: 'Timestamp(1574796704, 3089)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.168-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47040 #166 (44 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.347-0500 I SHARDING [conn22] distributed lock 'test3_fsmdb0' acquired for 'shardCollection', ts : 5ddd7da35cde74b6784bb86f
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.939-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.257-0500 I NETWORK [conn158] end connection 127.0.0.1:39518 (44 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.943-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-183--7234316082034423155, commit timestamp: Timestamp(1574796704, 3089)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.169-0500 I NETWORK [conn166] received client metadata from 127.0.0.1:47040 conn166: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.348-0500 I SHARDING [conn22] distributed lock 'test3_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7da35cde74b6784bb873
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.939-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test3_fsmdb0.agg_out (044c76b1-1b5b-4981-867a-b6690dd735b8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796704, 3089), t: 1 } and commit timestamp Timestamp(1574796704, 3089)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.261-0500 I INDEX [conn48] index build: done building index _id_ on ns test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:44.944-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c818c775-180f-4945-a093-dcbe8b55acd5: test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed ( d108d732-d756-4f25-8812-a6483de9ea4c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.170-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47042 #167 (45 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.350-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 from version {} to version { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.939-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test3_fsmdb0.agg_out (044c76b1-1b5b-4981-867a-b6690dd735b8).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.264-0500 I NETWORK [conn157] end connection 127.0.0.1:39516 (43 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.170-0500 I NETWORK [conn167] received client metadata from 127.0.0.1:47042 conn167: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.063-0500 I STORAGE [ReplWriterWorker-4] createCollection: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d with provided UUID: 06ddf75a-604c-46b9-832b-cc3a7313d379 and options: { uuid: UUID("06ddf75a-604c-46b9-832b-cc3a7313d379"), temp: true }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.350-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.agg_out to version 1|0||5ddd7da3cf8184c2e1493fd4 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.939-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 86ac9d9d-dbdd-45b4-b872-4d3229437831 from test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.266-0500 I INDEX [conn48] index build: done building index _id_hashed on ns test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.172-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47048 #168 (46 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.080-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.351-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7da35cde74b6784bb873' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.939-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (044c76b1-1b5b-4981-867a-b6690dd735b8)'. Ident: 'index-184--2310912778499990807', commit timestamp: 'Timestamp(1574796704, 3089)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.267-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 from version {} to version { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.172-0500 I NETWORK [conn168] received client metadata from 127.0.0.1:47048 conn168: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.081-0500 I STORAGE [ReplWriterWorker-10] createCollection: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f with provided UUID: c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c and options: { uuid: UUID("c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c"), temp: true }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.352-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7da35cde74b6784bb86f' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.939-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (044c76b1-1b5b-4981-867a-b6690dd735b8)'. Ident: 'index-193--2310912778499990807', commit timestamp: 'Timestamp(1574796704, 3089)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.267-0500 I SHARDING [conn48] Marking collection test3_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.194-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 from version {} to version { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.096-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.360-0500 I NETWORK [conn124] end connection 127.0.0.1:56888 (38 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.939-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-183--2310912778499990807, commit timestamp: Timestamp(1574796704, 3089)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.300-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.fsmcoll0 to version 1|3||5ddd7da0cf8184c2e1493df9 took 1 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.195-0500 I SHARDING [conn55] setting this node's cached database version for test3_fsmdb0 to { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.099-0500 I STORAGE [ReplWriterWorker-5] createCollection: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc with provided UUID: 25f11fc2-52a2-41e6-9ab2-e763ab10ac0f and options: { uuid: UUID("25f11fc2-52a2-41e6-9ab2-e763ab10ac0f"), temp: true }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.367-0500 I NETWORK [conn123] end connection 127.0.0.1:56886 (37 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:44.940-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 7c64ab6b-2f96-4bd2-8cbf-9cece153d8c1: test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed ( d108d732-d756-4f25-8812-a6483de9ea4c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.300-0500 I SHARDING [conn63] Updating metadata for collection test3_fsmdb0.fsmcoll0 from collection version: to collection version: 1|3||5ddd7da0cf8184c2e1493df9, shard version: 1|1||5ddd7da0cf8184c2e1493df9 due to version change
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.201-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47062 #169 (47 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.101-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35666 #71 (12 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.384-0500 I NETWORK [conn119] end connection 127.0.0.1:56782 (36 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.300-0500 I STORAGE [ShardServerCatalogCacheLoader-0] createCollection: config.cache.chunks.test3_fsmdb0.fsmcoll0 with provided UUID: d291b2bc-f179-4f06-8164-0b81d0131eb1 and options: { uuid: UUID("d291b2bc-f179-4f06-8164-0b81d0131eb1") }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.063-0500 I STORAGE [ReplWriterWorker-15] createCollection: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d with provided UUID: 06ddf75a-604c-46b9-832b-cc3a7313d379 and options: { uuid: UUID("06ddf75a-604c-46b9-832b-cc3a7313d379"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.201-0500 I NETWORK [conn169] received client metadata from 127.0.0.1:47062 conn169: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.101-0500 I NETWORK [conn71] received client metadata from 127.0.0.1:35666 conn71: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.384-0500 I NETWORK [conn120] end connection 127.0.0.1:56810 (35 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.315-0500 I INDEX [ShardServerCatalogCacheLoader-0] index build: done building index _id_ on ns config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.077-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.201-0500 I STORAGE [conn55] createCollection: test3_fsmdb0.fsmcoll0 with provided UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635 and options: { uuid: UUID("81145456-1c0e-4ef0-89a6-ab06e3485635") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.115-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.385-0500 I NETWORK [conn121] end connection 127.0.0.1:56820 (34 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.315-0500 I INDEX [ShardServerCatalogCacheLoader-0] Registering index build: 80443940-ac35-4b87-96b8-ff0f514f4e6c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.078-0500 I STORAGE [ReplWriterWorker-12] createCollection: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f with provided UUID: c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c and options: { uuid: UUID("c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.204-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47064 #170 (48 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.119-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed (d108d732-d756-4f25-8812-a6483de9ea4c) to test3_fsmdb0.agg_out and drop 86ac9d9d-dbdd-45b4-b872-4d3229437831.
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:47.880-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.387-0500 I NETWORK [conn122] end connection 127.0.0.1:56822 (33 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500 Implicit session: session { "id" : UUID("8570f5e3-ee7d-4e10-91ee-3d3bffc2542c") }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500 MongoDB server version: 0.0.0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500 true
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500 2019-11-26T14:31:47.943-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500 2019-11-26T14:31:47.943-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500 2019-11-26T14:31:47.944-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500 [jsTest] New session started with sessionID: { "id" : UUID("b12fafab-589b-402b-b80e-61dc4183f86b") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500 2019-11-26T14:31:47.948-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500 2019-11-26T14:31:47.948-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.840-0500 2019-11-26T14:31:47.948-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.841-0500 2019-11-26T14:31:47.948-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.841-0500 2019-11-26T14:31:47.949-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.841-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.841-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.841-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.841-0500 [jsTest] New session started with sessionID: { "id" : UUID("d1b40e07-3722-45c8-8322-1144f1bfb222") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.841-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.841-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.841-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.841-0500 2019-11-26T14:31:47.951-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.841-0500 2019-11-26T14:31:47.951-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.841-0500 2019-11-26T14:31:47.951-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.841-0500 2019-11-26T14:31:47.951-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.841-0500 2019-11-26T14:31:47.952-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.330-0500 I INDEX [ShardServerCatalogCacheLoader-0] index build: starting on config.cache.chunks.test3_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.096-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.204-0500 I NETWORK [conn170] received client metadata from 127.0.0.1:47064 conn170: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.933-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45524 #144 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:47.949-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53596 #77 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:47.949-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52710 #83 (11 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.119-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test3_fsmdb0.agg_out (86ac9d9d-dbdd-45b4-b872-4d3229437831) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796707, 505), t: 1 } and commit timestamp Timestamp(1574796707, 505)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.397-0500 I NETWORK [conn118] end connection 127.0.0.1:56780 (32 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.330-0500 I INDEX [ShardServerCatalogCacheLoader-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.100-0500 I STORAGE [ReplWriterWorker-4] createCollection: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc with provided UUID: 25f11fc2-52a2-41e6-9ab2-e763ab10ac0f and options: { uuid: UUID("25f11fc2-52a2-41e6-9ab2-e763ab10ac0f"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.212-0500 I INDEX [conn55] index build: done building index _id_ on ns test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:47.933-0500 I NETWORK [conn144] received client metadata from 127.0.0.1:45524 conn144: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:47.949-0500 I NETWORK [conn83] received client metadata from 127.0.0.1:52710 conn83: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:47.949-0500 I NETWORK [conn77] received client metadata from 127.0.0.1:53596 conn77: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.119-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test3_fsmdb0.agg_out (86ac9d9d-dbdd-45b4-b872-4d3229437831).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.943-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56928 #125 (33 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.330-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Index build initialized: 80443940-ac35-4b87-96b8-ff0f514f4e6c: config.cache.chunks.test3_fsmdb0.fsmcoll0 (d291b2bc-f179-4f06-8164-0b81d0131eb1 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.101-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52304 #77 (12 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.212-0500 I INDEX [conn55] Registering index build: 0cef9545-beb5-4010-af37-b798025d4b21
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.119-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection d108d732-d756-4f25-8812-a6483de9ea4c from test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.119-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (86ac9d9d-dbdd-45b4-b872-4d3229437831)'. Ident: 'index-186--7234316082034423155', commit timestamp: 'Timestamp(1574796707, 505)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.331-0500 I INDEX [ShardServerCatalogCacheLoader-0] Waiting for index build to complete: 80443940-ac35-4b87-96b8-ff0f514f4e6c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.101-0500 I NETWORK [conn77] received client metadata from 127.0.0.1:52304 conn77: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.215-0500 W CONTROL [conn170] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 47 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.944-0500 I NETWORK [conn125] received client metadata from 127.0.0.1:56928 conn125: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.119-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (86ac9d9d-dbdd-45b4-b872-4d3229437831)'. Ident: 'index-195--7234316082034423155', commit timestamp: 'Timestamp(1574796707, 505)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.331-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.114-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.225-0500 I INDEX [conn55] index build: starting on test3_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.944-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56930 #126 (34 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.119-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-185--7234316082034423155, commit timestamp: Timestamp(1574796707, 505)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.331-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.117-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed (d108d732-d756-4f25-8812-a6483de9ea4c) to test3_fsmdb0.agg_out and drop 86ac9d9d-dbdd-45b4-b872-4d3229437831.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.225-0500 I INDEX [conn55] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:47.944-0500 I NETWORK [conn126] received client metadata from 127.0.0.1:56930 conn126: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.134-0500 I INDEX [ReplWriterWorker-11] index build: starting on test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.335-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.117-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test3_fsmdb0.agg_out (86ac9d9d-dbdd-45b4-b872-4d3229437831) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796707, 505), t: 1 } and commit timestamp Timestamp(1574796707, 505)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.225-0500 I STORAGE [conn55] Index build initialized: 0cef9545-beb5-4010-af37-b798025d4b21: test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.134-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.337-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 80443940-ac35-4b87-96b8-ff0f514f4e6c: config.cache.chunks.test3_fsmdb0.fsmcoll0 ( d291b2bc-f179-4f06-8164-0b81d0131eb1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.117-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test3_fsmdb0.agg_out (86ac9d9d-dbdd-45b4-b872-4d3229437831).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.225-0500 I INDEX [conn55] Waiting for index build to complete: 0cef9545-beb5-4010-af37-b798025d4b21
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.134-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 299460aa-e9ff-4b53-92ce-73c7a5d602ad: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f (c61745e2-9e2c-43eb-bcff-b2a7c934a0dc ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.337-0500 I INDEX [ShardServerCatalogCacheLoader-0] Index build completed: 80443940-ac35-4b87-96b8-ff0f514f4e6c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.117-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection d108d732-d756-4f25-8812-a6483de9ea4c from test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.225-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.135-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:44.337-0500 I SHARDING [ShardServerCatalogCacheLoader-0] Marking collection config.cache.chunks.test3_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.117-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (86ac9d9d-dbdd-45b4-b872-4d3229437831)'. Ident: 'index-186--2310912778499990807', commit timestamp: 'Timestamp(1574796707, 505)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.226-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.135-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.098-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39630 #167 (44 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.117-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (86ac9d9d-dbdd-45b4-b872-4d3229437831)'. Ident: 'index-195--2310912778499990807', commit timestamp: 'Timestamp(1574796707, 505)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.228-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.137-0500 I STORAGE [ReplWriterWorker-8] createCollection: config.cache.chunks.test3_fsmdb0.agg_out with provided UUID: 4c26dac0-af8d-4579-bbb5-32356c1d2f49 and options: { uuid: UUID("4c26dac0-af8d-4579-bbb5-32356c1d2f49") }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.098-0500 I NETWORK [conn167] received client metadata from 127.0.0.1:39630 conn167: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.117-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-185--2310912778499990807, commit timestamp: Timestamp(1574796707, 505)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.231-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 0cef9545-beb5-4010-af37-b798025d4b21: test3_fsmdb0.fsmcoll0 ( 81145456-1c0e-4ef0-89a6-ab06e3485635 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.138-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.099-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39636 #168 (45 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.133-0500 I INDEX [ReplWriterWorker-0] index build: starting on test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.231-0500 I INDEX [conn55] Index build completed: 0cef9545-beb5-4010-af37-b798025d4b21
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.147-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 299460aa-e9ff-4b53-92ce-73c7a5d602ad: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f ( c61745e2-9e2c-43eb-bcff-b2a7c934a0dc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.099-0500 I NETWORK [conn168] received client metadata from 127.0.0.1:39636 conn168: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.133-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.247-0500 I SHARDING [conn55] CMD: shardcollection: { _shardsvrShardCollection: "test3_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("53087a90-9976-412a-8155-7dc18f9c5dc2"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 11), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45414", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 11), t: 1 } }, $db: "admin" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.155-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.145-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.133-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 1964891e-79ed-4e28-a9d9-032cbf5821f5: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f (c61745e2-9e2c-43eb-bcff-b2a7c934a0dc ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.247-0500 I SHARDING [conn55] about to log metadata event into changelog: { _id: "nz_desktop:20004-2019-11-26T14:31:44.247-0500-5ddd7da0cf8184c2e1493df7", server: "nz_desktop:20004", shard: "shard-rs1", clientAddr: "127.0.0.1:46028", time: new Date(1574796704247), what: "shardCollection.start", ns: "test3_fsmdb0.fsmcoll0", details: { shardKey: { _id: "hashed" }, collection: "test3_fsmdb0.fsmcoll0", uuid: UUID("81145456-1c0e-4ef0-89a6-ab06e3485635"), empty: true, fromMapReduce: false, primary: "shard-rs1:shard-rs1/localhost:20004,localhost:20005,localhost:20006", numChunks: 4 } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.176-0500 I INDEX [ReplWriterWorker-5] index build: starting on test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.152-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.133-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.250-0500 W CONTROL [conn170] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 51 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.176-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.171-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.134-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.253-0500 I NETWORK [conn169] end connection 127.0.0.1:47062 (47 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.176-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 5506f32c-7c53-4410-a098-b4668049f961: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d (06ddf75a-604c-46b9-832b-cc3a7313d379 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.171-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.136-0500 I STORAGE [ReplWriterWorker-4] createCollection: config.cache.chunks.test3_fsmdb0.agg_out with provided UUID: 4c26dac0-af8d-4579-bbb5-32356c1d2f49 and options: { uuid: UUID("4c26dac0-af8d-4579-bbb5-32356c1d2f49") }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.253-0500 I NETWORK [conn170] end connection 127.0.0.1:47064 (46 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.177-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.195-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39652 #169 (46 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.136-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.257-0500 I NETWORK [conn162] end connection 127.0.0.1:46996 (45 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.178-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.195-0500 I NETWORK [conn169] received client metadata from 127.0.0.1:39652 conn169: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.146-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 1964891e-79ed-4e28-a9d9-032cbf5821f5: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f ( c61745e2-9e2c-43eb-bcff-b2a7c934a0dc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.264-0500 I NETWORK [conn161] end connection 127.0.0.1:46990 (44 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.182-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.197-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39654 #170 (47 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.153-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.298-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.fsmcoll0 to version 1|3||5ddd7da0cf8184c2e1493df9 took 1 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.190-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 5506f32c-7c53-4410-a098-b4668049f961: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d ( 06ddf75a-604c-46b9-832b-cc3a7313d379 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.197-0500 I NETWORK [conn170] received client metadata from 127.0.0.1:39654 conn170: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.176-0500 I INDEX [ReplWriterWorker-0] index build: starting on test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.298-0500 I SHARDING [conn55] Marking collection test3_fsmdb0.fsmcoll0 as collection version: 1|3||5ddd7da0cf8184c2e1493df9, shard version: 1|3||5ddd7da0cf8184c2e1493df9
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.198-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35688 #72 (13 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.208-0500 W CONTROL [conn170] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.176-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.298-0500 I STORAGE [ShardServerCatalogCacheLoader-1] createCollection: config.cache.chunks.test3_fsmdb0.fsmcoll0 with provided UUID: a33e44c0-60ea-478a-83bd-e45f3213aca7 and options: { uuid: UUID("a33e44c0-60ea-478a-83bd-e45f3213aca7") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.199-0500 I INDEX [ReplWriterWorker-13] index build: starting on test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.211-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.176-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: f0f01256-30c9-4ee4-8ff9-a85b392c8d32: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d (06ddf75a-604c-46b9-832b-cc3a7313d379 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.314-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: done building index _id_ on ns config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.199-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.211-0500 I COMMAND [conn65] CMD: dropIndexes test3_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.176-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.314-0500 I INDEX [ShardServerCatalogCacheLoader-1] Registering index build: d90af0d0-481d-4d42-9804-535d294de621
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.199-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: d489176f-a057-4a9c-9235-9a881ad8ffd3: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f (c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.224-0500 W CONTROL [conn170] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.176-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.329-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: starting on config.cache.chunks.test3_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.199-0500 I NETWORK [conn72] received client metadata from 127.0.0.1:35688 conn72: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.227-0500 I NETWORK [conn169] end connection 127.0.0.1:39652 (46 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.180-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.329-0500 I INDEX [ShardServerCatalogCacheLoader-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.199-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.227-0500 I NETWORK [conn170] end connection 127.0.0.1:39654 (45 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.184-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: f0f01256-30c9-4ee4-8ff9-a85b392c8d32: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d ( 06ddf75a-604c-46b9-832b-cc3a7313d379 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.329-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Index build initialized: d90af0d0-481d-4d42-9804-535d294de621: config.cache.chunks.test3_fsmdb0.fsmcoll0 (a33e44c0-60ea-478a-83bd-e45f3213aca7 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.199-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.231-0500 I COMMAND [conn65] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.198-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52324 #78 (13 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.329-0500 I INDEX [ShardServerCatalogCacheLoader-1] Waiting for index build to complete: d90af0d0-481d-4d42-9804-535d294de621
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.202-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.233-0500 I COMMAND [conn65] CMD: dropIndexes test3_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.198-0500 I NETWORK [conn78] received client metadata from 127.0.0.1:52324 conn78: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.329-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.204-0500 I COMMAND [ReplWriterWorker-13] CMD: drop test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.245-0500 I COMMAND [conn65] CMD: dropIndexes test3_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.201-0500 I INDEX [ReplWriterWorker-7] index build: starting on test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.330-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.204-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f (c61745e2-9e2c-43eb-bcff-b2a7c934a0dc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796707, 1024), t: 1 } and commit timestamp Timestamp(1574796707, 1024)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.248-0500 I COMMAND [conn65] CMD: dropIndexes test3_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.201-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.333-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.204-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f (c61745e2-9e2c-43eb-bcff-b2a7c934a0dc).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.249-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.201-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: d93a42d9-c951-40d9-b4f1-a74a39918482: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f (c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.335-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d90af0d0-481d-4d42-9804-535d294de621: config.cache.chunks.test3_fsmdb0.fsmcoll0 ( a33e44c0-60ea-478a-83bd-e45f3213aca7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.204-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f (c61745e2-9e2c-43eb-bcff-b2a7c934a0dc)'. Ident: 'index-200--7234316082034423155', commit timestamp: 'Timestamp(1574796707, 1024)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.254-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.201-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.335-0500 I INDEX [ShardServerCatalogCacheLoader-1] Index build completed: d90af0d0-481d-4d42-9804-535d294de621
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.204-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f (c61745e2-9e2c-43eb-bcff-b2a7c934a0dc)'. Ident: 'index-209--7234316082034423155', commit timestamp: 'Timestamp(1574796707, 1024)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.254-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.201-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.335-0500 I SHARDING [ShardServerCatalogCacheLoader-1] Marking collection config.cache.chunks.test3_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.204-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f'. Ident: collection-199--7234316082034423155, commit timestamp: Timestamp(1574796707, 1024)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.255-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.203-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.339-0500 I SHARDING [conn55] Created 4 chunk(s) for: test3_fsmdb0.fsmcoll0, producing collection version 1|3||5ddd7da0cf8184c2e1493df9
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.204-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d489176f-a057-4a9c-9235-9a881ad8ffd3: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f ( c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.256-0500 I COMMAND [conn71] CMD: dropIndexes test3_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.207-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d93a42d9-c951-40d9-b4f1-a74a39918482: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f ( c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.339-0500 I SHARDING [conn55] about to log metadata event into changelog: { _id: "nz_desktop:20004-2019-11-26T14:31:44.339-0500-5ddd7da0cf8184c2e1493e29", server: "nz_desktop:20004", shard: "shard-rs1", clientAddr: "127.0.0.1:46028", time: new Date(1574796704339), what: "shardCollection.end", ns: "test3_fsmdb0.fsmcoll0", details: { version: "1|3||5ddd7da0cf8184c2e1493df9" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.209-0500 W CONTROL [conn72] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 48 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.256-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.207-0500 I COMMAND [ReplWriterWorker-1] CMD: drop test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.340-0500 I COMMAND [conn55] command admin.$cmd appName: "MongoDB Shell" command: _shardsvrShardCollection { _shardsvrShardCollection: "test3_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("53087a90-9976-412a-8155-7dc18f9c5dc2"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 11), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45414", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 11), t: 1 } }, $db: "admin" } numYields:0 reslen:415 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 9 } }, ReplicationStateTransition: { acquireCount: { w: 15 } }, Global: { acquireCount: { r: 8, w: 7 } }, Database: { acquireCount: { r: 8, w: 7, W: 1 } }, Collection: { acquireCount: { r: 8, w: 3, W: 4 } }, Mutex: { acquireCount: { r: 16, W: 4 } } } flowControl:{ acquireCount: 5 } storage:{} protocol:op_msg 140ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.221-0500 I INDEX [ReplWriterWorker-14] index build: starting on test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.266-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.207-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f (c61745e2-9e2c-43eb-bcff-b2a7c934a0dc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796707, 1024), t: 1 } and commit timestamp Timestamp(1574796707, 1024)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.496-0500 I STORAGE [conn55] createCollection: test3_fsmdb0.agg_out with generated UUID: 12bfc6f5-a5d9-4228-a70a-b419624ce864 and options: {}
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.221-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.267-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.207-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f (c61745e2-9e2c-43eb-bcff-b2a7c934a0dc).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.506-0500 I INDEX [conn55] index build: done building index _id_ on ns test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.221-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 8f9bc7bd-cae3-42c4-a567-508deb3e437d: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc (25f11fc2-52a2-41e6-9ab2-e763ab10ac0f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.269-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.207-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f (c61745e2-9e2c-43eb-bcff-b2a7c934a0dc)'. Ident: 'index-200--2310912778499990807', commit timestamp: 'Timestamp(1574796707, 1024)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.528-0500 I INDEX [conn65] Registering index build: b7fa8ec3-2ae2-4680-80ec-ec228211063b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.221-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.270-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.207-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f (c61745e2-9e2c-43eb-bcff-b2a7c934a0dc)'. Ident: 'index-209--2310912778499990807', commit timestamp: 'Timestamp(1574796707, 1024)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.538-0500 I INDEX [conn65] index build: starting on test3_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.222-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.273-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.207-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f'. Ident: collection-199--2310912778499990807, commit timestamp: Timestamp(1574796707, 1024)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.538-0500 I INDEX [conn65] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.224-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.273-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.209-0500 W CONTROL [conn78] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 44 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.539-0500 I STORAGE [conn65] Index build initialized: b7fa8ec3-2ae2-4680-80ec-ec228211063b: test3_fsmdb0.agg_out (12bfc6f5-a5d9-4228-a70a-b419624ce864 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.234-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 8f9bc7bd-cae3-42c4-a567-508deb3e437d: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc ( 25f11fc2-52a2-41e6-9ab2-e763ab10ac0f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.274-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.222-0500 I INDEX [ReplWriterWorker-0] index build: starting on test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.539-0500 I INDEX [conn65] Waiting for index build to complete: b7fa8ec3-2ae2-4680-80ec-ec228211063b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.241-0500 I INDEX [ReplWriterWorker-12] index build: starting on config.cache.chunks.test3_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.276-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.222-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.539-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.241-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.277-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.222-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 70d8d1b4-5c15-4671-8b07-f7399ba279b8: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc (25f11fc2-52a2-41e6-9ab2-e763ab10ac0f ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.539-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.241-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: d657213e-4f15-4cf8-bcd3-1b8dbecef3e9: config.cache.chunks.test3_fsmdb0.agg_out (4c26dac0-af8d-4579-bbb5-32356c1d2f49 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.283-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.222-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.541-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.241-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.288-0500 I COMMAND [conn71] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.223-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.542-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: b7fa8ec3-2ae2-4680-80ec-ec228211063b: test3_fsmdb0.agg_out ( 12bfc6f5-a5d9-4228-a70a-b419624ce864 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.242-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.288-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.226-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.542-0500 I INDEX [conn65] Index build completed: b7fa8ec3-2ae2-4680-80ec-ec228211063b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.244-0500 I SHARDING [ReplWriterWorker-3] Marking collection config.cache.chunks.test3_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.291-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.235-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 70d8d1b4-5c15-4671-8b07-f7399ba279b8: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc ( 25f11fc2-52a2-41e6-9ab2-e763ab10ac0f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.649-0500 I STORAGE [conn88] createCollection: test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 with generated UUID: 4fb1c91d-b1bb-4f34-b5a6-a959123153c4 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.248-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: drain applied 1 side writes (inserted: 1, deleted: 0) for 'lastmod_1' in 3 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.292-0500 I COMMAND [conn71] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.243-0500 I INDEX [ReplWriterWorker-2] index build: starting on config.cache.chunks.test3_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.649-0500 I STORAGE [conn85] createCollection: test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 with generated UUID: 9994eb3f-8b56-4ba9-9d36-e7240503d188 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.248-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index lastmod_1 on ns config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.301-0500 I COMMAND [conn68] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.243-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.650-0500 I STORAGE [conn84] createCollection: test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 with generated UUID: 86906110-e72c-4bfd-9dfb-e0faa8857257 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.250-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d657213e-4f15-4cf8-bcd3-1b8dbecef3e9: config.cache.chunks.test3_fsmdb0.agg_out ( 4c26dac0-af8d-4579-bbb5-32356c1d2f49 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.312-0500 I COMMAND [conn71] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.243-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 6d227acc-b106-40dc-8ea3-288388433e2f: config.cache.chunks.test3_fsmdb0.agg_out (4c26dac0-af8d-4579-bbb5-32356c1d2f49 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.651-0500 I STORAGE [conn77] createCollection: test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 with generated UUID: 044c76b1-1b5b-4981-867a-b6690dd735b8 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.263-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.316-0500 I COMMAND [conn70] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.243-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.651-0500 I STORAGE [conn82] createCollection: test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 with generated UUID: 86ac9d9d-dbdd-45b4-b872-4d3229437831 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.263-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f (c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796707, 2484), t: 1 } and commit timestamp Timestamp(1574796707, 2484)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.323-0500 I COMMAND [conn70] CMD: dropIndexes test3_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.244-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.678-0500 I INDEX [conn88] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.263-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f (c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.324-0500 I COMMAND [conn70] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.246-0500 I SHARDING [ReplWriterWorker-13] Marking collection config.cache.chunks.test3_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.679-0500 I INDEX [conn88] Registering index build: 61e61973-d4f1-444b-883c-40bf2de236f9
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.263-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f (c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c)'. Ident: 'index-206--7234316082034423155', commit timestamp: 'Timestamp(1574796707, 2484)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.328-0500 I COMMAND [conn70] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.248-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: drain applied 1 side writes (inserted: 1, deleted: 0) for 'lastmod_1' in 1 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.684-0500 I INDEX [conn85] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.263-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f (c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c)'. Ident: 'index-215--7234316082034423155', commit timestamp: 'Timestamp(1574796707, 2484)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.333-0500 I COMMAND [conn70] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.248-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.684-0500 I INDEX [conn85] Registering index build: 95ecce81-85a6-444b-9cd3-332bff0c5e7a
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.263-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f'. Ident: collection-205--7234316082034423155, commit timestamp: Timestamp(1574796707, 2484)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.361-0500 I NETWORK [conn168] end connection 127.0.0.1:39636 (44 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.248-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 6d227acc-b106-40dc-8ea3-288388433e2f: config.cache.chunks.test3_fsmdb0.agg_out ( 4c26dac0-af8d-4579-bbb5-32356c1d2f49 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.690-0500 I INDEX [conn84] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.264-0500 I COMMAND [ReplWriterWorker-3] CMD: drop test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.361-0500 I COMMAND [conn70] CMD: dropIndexes test3_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.277-0500 I COMMAND [ReplWriterWorker-3] CMD: drop test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.691-0500 I INDEX [conn84] Registering index build: 8cd2b2a1-868c-455d-8b96-b313512f7fad
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.264-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d (06ddf75a-604c-46b9-832b-cc3a7313d379) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796707, 2485), t: 1 } and commit timestamp Timestamp(1574796707, 2485)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.367-0500 I NETWORK [conn167] end connection 127.0.0.1:39630 (43 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.277-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f (c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796707, 2484), t: 1 } and commit timestamp Timestamp(1574796707, 2484)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.699-0500 I INDEX [conn77] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.264-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d (06ddf75a-604c-46b9-832b-cc3a7313d379).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.384-0500 I NETWORK [conn160] end connection 127.0.0.1:39538 (42 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.277-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f (c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.700-0500 I INDEX [conn77] Registering index build: 0417f35f-8e5b-47ba-941e-6eb136664ec8
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.264-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d (06ddf75a-604c-46b9-832b-cc3a7313d379)'. Ident: 'index-204--7234316082034423155', commit timestamp: 'Timestamp(1574796707, 2485)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.384-0500 I NETWORK [conn161] end connection 127.0.0.1:39552 (41 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.277-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f (c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c)'. Ident: 'index-206--2310912778499990807', commit timestamp: 'Timestamp(1574796707, 2484)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.707-0500 I INDEX [conn82] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.264-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d (06ddf75a-604c-46b9-832b-cc3a7313d379)'. Ident: 'index-213--7234316082034423155', commit timestamp: 'Timestamp(1574796707, 2485)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.385-0500 I NETWORK [conn162] end connection 127.0.0.1:39564 (40 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.277-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f (c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c)'. Ident: 'index-215--2310912778499990807', commit timestamp: 'Timestamp(1574796707, 2484)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.707-0500 I INDEX [conn82] Registering index build: 8f88ffbc-176c-4ef0-8570-e33d724925db
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.264-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d'. Ident: collection-203--7234316082034423155, commit timestamp: Timestamp(1574796707, 2485)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.385-0500 I NETWORK [conn164] end connection 127.0.0.1:39572 (39 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.277-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f'. Ident: collection-205--2310912778499990807, commit timestamp: Timestamp(1574796707, 2484)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.721-0500 I INDEX [conn88] index build: starting on test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.266-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.385-0500 I NETWORK [conn163] end connection 127.0.0.1:39566 (38 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.278-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.721-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.267-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc (25f11fc2-52a2-41e6-9ab2-e763ab10ac0f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796707, 2539), t: 1 } and commit timestamp Timestamp(1574796707, 2539)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.397-0500 I NETWORK [conn159] end connection 127.0.0.1:39536 (37 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.278-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d (06ddf75a-604c-46b9-832b-cc3a7313d379) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796707, 2485), t: 1 } and commit timestamp Timestamp(1574796707, 2485)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.721-0500 I STORAGE [conn88] Index build initialized: 61e61973-d4f1-444b-883c-40bf2de236f9: test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 (4fb1c91d-b1bb-4f34-b5a6-a959123153c4 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.267-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc (25f11fc2-52a2-41e6-9ab2-e763ab10ac0f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.949-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39672 #171 (38 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.278-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d (06ddf75a-604c-46b9-832b-cc3a7313d379).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.721-0500 I INDEX [conn88] Waiting for index build to complete: 61e61973-d4f1-444b-883c-40bf2de236f9
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.267-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc (25f11fc2-52a2-41e6-9ab2-e763ab10ac0f)'. Ident: 'index-208--7234316082034423155', commit timestamp: 'Timestamp(1574796707, 2539)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.949-0500 I NETWORK [conn171] received client metadata from 127.0.0.1:39672 conn171: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.278-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d (06ddf75a-604c-46b9-832b-cc3a7313d379)'. Ident: 'index-204--2310912778499990807', commit timestamp: 'Timestamp(1574796707, 2485)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.736-0500 I INDEX [conn85] index build: starting on test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.267-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc (25f11fc2-52a2-41e6-9ab2-e763ab10ac0f)'. Ident: 'index-217--7234316082034423155', commit timestamp: 'Timestamp(1574796707, 2539)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.949-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39678 #172 (39 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.278-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d (06ddf75a-604c-46b9-832b-cc3a7313d379)'. Ident: 'index-213--2310912778499990807', commit timestamp: 'Timestamp(1574796707, 2485)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.736-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.267-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc'. Ident: collection-207--7234316082034423155, commit timestamp: Timestamp(1574796707, 2539)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:47.950-0500 I NETWORK [conn172] received client metadata from 127.0.0.1:39678 conn172: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.278-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d'. Ident: collection-203--2310912778499990807, commit timestamp: Timestamp(1574796707, 2485)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.736-0500 I STORAGE [conn85] Index build initialized: 95ecce81-85a6-444b-9cd3-332bff0c5e7a: test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 (9994eb3f-8b56-4ba9-9d36-e7240503d188 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.267-0500 I STORAGE [ReplWriterWorker-6] createCollection: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 with provided UUID: 6b785421-9783-477e-b4dc-e9674336abe9 and options: { uuid: UUID("6b785421-9783-477e-b4dc-e9674336abe9"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.281-0500 I COMMAND [ReplWriterWorker-13] CMD: drop test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.736-0500 I INDEX [conn85] Waiting for index build to complete: 95ecce81-85a6-444b-9cd3-332bff0c5e7a
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.281-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.281-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc (25f11fc2-52a2-41e6-9ab2-e763ab10ac0f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796707, 2539), t: 1 } and commit timestamp Timestamp(1574796707, 2539)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.736-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.299-0500 I INDEX [ReplWriterWorker-6] index build: starting on test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.281-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc (25f11fc2-52a2-41e6-9ab2-e763ab10ac0f).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.737-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.299-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.281-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc (25f11fc2-52a2-41e6-9ab2-e763ab10ac0f)'. Ident: 'index-208--2310912778499990807', commit timestamp: 'Timestamp(1574796707, 2539)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.744-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.299-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 6d54b49c-b52f-4ccd-96d1-fbbf14699ffd: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 (6b785421-9783-477e-b4dc-e9674336abe9 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.281-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc (25f11fc2-52a2-41e6-9ab2-e763ab10ac0f)'. Ident: 'index-217--2310912778499990807', commit timestamp: 'Timestamp(1574796707, 2539)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.752-0500 I INDEX [conn84] index build: starting on test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.299-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.281-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc'. Ident: collection-207--2310912778499990807, commit timestamp: Timestamp(1574796707, 2539)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.752-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.300-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.282-0500 I STORAGE [ReplWriterWorker-4] createCollection: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 with provided UUID: 6b785421-9783-477e-b4dc-e9674336abe9 and options: { uuid: UUID("6b785421-9783-477e-b4dc-e9674336abe9"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.752-0500 I STORAGE [conn84] Index build initialized: 8cd2b2a1-868c-455d-8b96-b313512f7fad: test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 (86906110-e72c-4bfd-9dfb-e0faa8857257 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.302-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.296-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.752-0500 I INDEX [conn84] Waiting for index build to complete: 8cd2b2a1-868c-455d-8b96-b313512f7fad
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.304-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 6d54b49c-b52f-4ccd-96d1-fbbf14699ffd: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 ( 6b785421-9783-477e-b4dc-e9674336abe9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.316-0500 I INDEX [ReplWriterWorker-4] index build: starting on test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.752-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.315-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.316-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.752-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.315-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 (6b785421-9783-477e-b4dc-e9674336abe9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796707, 3056), t: 1 } and commit timestamp Timestamp(1574796707, 3056)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.316-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: cea69107-2bfa-4470-bfbc-31113be76a44: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 (6b785421-9783-477e-b4dc-e9674336abe9 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.753-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 61e61973-d4f1-444b-883c-40bf2de236f9: test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 ( 4fb1c91d-b1bb-4f34-b5a6-a959123153c4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.315-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 (6b785421-9783-477e-b4dc-e9674336abe9).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.316-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.754-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.315-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 (6b785421-9783-477e-b4dc-e9674336abe9)'. Ident: 'index-222--7234316082034423155', commit timestamp: 'Timestamp(1574796707, 3056)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.317-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.754-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.315-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 (6b785421-9783-477e-b4dc-e9674336abe9)'. Ident: 'index-223--7234316082034423155', commit timestamp: 'Timestamp(1574796707, 3056)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.319-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.764-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.315-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7'. Ident: collection-221--7234316082034423155, commit timestamp: Timestamp(1574796707, 3056)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.321-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: cea69107-2bfa-4470-bfbc-31113be76a44: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 ( 6b785421-9783-477e-b4dc-e9674336abe9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.767-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.355-0500 W CONTROL [conn72] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 77 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.332-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.774-0500 I INDEX [conn77] index build: starting on test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.357-0500 I NETWORK [conn72] end connection 127.0.0.1:35688 (12 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.333-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 (6b785421-9783-477e-b4dc-e9674336abe9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796707, 3056), t: 1 } and commit timestamp Timestamp(1574796707, 3056)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.774-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.367-0500 I NETWORK [conn71] end connection 127.0.0.1:35666 (11 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.333-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 (6b785421-9783-477e-b4dc-e9674336abe9).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.774-0500 I STORAGE [conn77] Index build initialized: 0417f35f-8e5b-47ba-941e-6eb136664ec8: test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 (044c76b1-1b5b-4981-867a-b6690dd735b8 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.386-0500 I NETWORK [conn69] end connection 127.0.0.1:35604 (10 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.333-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 (6b785421-9783-477e-b4dc-e9674336abe9)'. Ident: 'index-222--2310912778499990807', commit timestamp: 'Timestamp(1574796707, 3056)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.774-0500 I INDEX [conn77] Waiting for index build to complete: 0417f35f-8e5b-47ba-941e-6eb136664ec8
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.397-0500 I NETWORK [conn68] end connection 127.0.0.1:35566 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.333-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 (6b785421-9783-477e-b4dc-e9674336abe9)'. Ident: 'index-223--2310912778499990807', commit timestamp: 'Timestamp(1574796707, 3056)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.774-0500 I INDEX [conn88] Index build completed: 61e61973-d4f1-444b-883c-40bf2de236f9
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.951-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35708 #73 (10 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.333-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7'. Ident: collection-221--2310912778499990807, commit timestamp: Timestamp(1574796707, 3056)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.774-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:47.952-0500 I NETWORK [conn73] received client metadata from 127.0.0.1:35708 conn73: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.355-0500 W CONTROL [conn78] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 124 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.775-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 8cd2b2a1-868c-455d-8b96-b313512f7fad: test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 ( 86906110-e72c-4bfd-9dfb-e0faa8857257 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.357-0500 I NETWORK [conn78] end connection 127.0.0.1:52324 (12 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.779-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 95ecce81-85a6-444b-9cd3-332bff0c5e7a: test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 ( 9994eb3f-8b56-4ba9-9d36-e7240503d188 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.367-0500 I NETWORK [conn77] end connection 127.0.0.1:52304 (11 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.780-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.386-0500 I NETWORK [conn75] end connection 127.0.0.1:52242 (10 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.783-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.397-0500 I NETWORK [conn74] end connection 127.0.0.1:52204 (9 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.794-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 0417f35f-8e5b-47ba-941e-6eb136664ec8: test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 ( 044c76b1-1b5b-4981-867a-b6690dd735b8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.951-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52344 #79 (10 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.801-0500 I INDEX [conn82] index build: starting on test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:47.951-0500 I NETWORK [conn79] received client metadata from 127.0.0.1:52344 conn79: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.801-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.801-0500 I STORAGE [conn82] Index build initialized: 8f88ffbc-176c-4ef0-8570-e33d724925db: test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 (86ac9d9d-dbdd-45b4-b872-4d3229437831 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.802-0500 I INDEX [conn84] Index build completed: 8cd2b2a1-868c-455d-8b96-b313512f7fad
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.802-0500 I INDEX [conn85] Index build completed: 95ecce81-85a6-444b-9cd3-332bff0c5e7a
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.802-0500 I INDEX [conn82] Waiting for index build to complete: 8f88ffbc-176c-4ef0-8570-e33d724925db
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.802-0500 I INDEX [conn77] Index build completed: 0417f35f-8e5b-47ba-941e-6eb136664ec8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.802-0500 I COMMAND [conn88] renameCollectionForCommand: rename test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 to test3_fsmdb0.agg_out and drop test3_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.802-0500 I COMMAND [conn84] command test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("fe44fdfe-c512-4ee5-9746-0ef4f91d78d0"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 564), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45474", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 110ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.802-0500 I COMMAND [conn85] command test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("49d93db0-9abd-4103-bc81-b25086705499"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 564), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45478", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 117ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.802-0500 I COMMAND [conn77] command test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("c396db2f-788f-419f-b744-d7ae3889c6f5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 564), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58618", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 101ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.802-0500 I STORAGE [conn88] dropCollection: test3_fsmdb0.agg_out (12bfc6f5-a5d9-4228-a70a-b419624ce864) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796704, 1078), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.802-0500 I STORAGE [conn88] Finishing collection drop for test3_fsmdb0.agg_out (12bfc6f5-a5d9-4228-a70a-b419624ce864).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.802-0500 I STORAGE [conn88] renameCollection: renaming collection 4fb1c91d-b1bb-4f34-b5a6-a959123153c4 from test3_fsmdb0.tmp.agg_out.81495f67-9195-4644-adc5-0465bc3f1564 to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.802-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (12bfc6f5-a5d9-4228-a70a-b419624ce864)'. Ident: 'index-165--2588534479858262356', commit timestamp: 'Timestamp(1574796704, 1078)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.802-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (12bfc6f5-a5d9-4228-a70a-b419624ce864)'. Ident: 'index-166--2588534479858262356', commit timestamp: 'Timestamp(1574796704, 1078)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.802-0500 I STORAGE [conn88] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-164--2588534479858262356, commit timestamp: Timestamp(1574796704, 1078)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.802-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.802-0500 I COMMAND [conn65] command test3_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d53e7426-95aa-4942-aab6-beb057515432"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4406302753950035930, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5386534648310953848, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test3_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796704648), clusterTime: Timestamp(1574796704, 559) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d53e7426-95aa-4942-aab6-beb057515432"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 559), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45464", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 153ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.803-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.805-0500 I COMMAND [conn65] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.805-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.808-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 8f88ffbc-176c-4ef0-8570-e33d724925db: test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 ( 86ac9d9d-dbdd-45b4-b872-4d3229437831 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.808-0500 I INDEX [conn82] Index build completed: 8f88ffbc-176c-4ef0-8570-e33d724925db
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.808-0500 I COMMAND [conn82] command test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("8f169c90-43a1-4c3d-84ba-96afe3bea6ba"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 564), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58622", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 100ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.808-0500 I STORAGE [conn84] createCollection: test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed with generated UUID: d108d732-d756-4f25-8812-a6483de9ea4c and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.830-0500 I INDEX [conn84] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.830-0500 I COMMAND [conn77] renameCollectionForCommand: rename test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 to test3_fsmdb0.agg_out and drop test3_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.830-0500 I STORAGE [conn77] dropCollection: test3_fsmdb0.agg_out (4fb1c91d-b1bb-4f34-b5a6-a959123153c4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796704, 2798), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.830-0500 I STORAGE [conn77] Finishing collection drop for test3_fsmdb0.agg_out (4fb1c91d-b1bb-4f34-b5a6-a959123153c4).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.830-0500 I STORAGE [conn77] renameCollection: renaming collection 86906110-e72c-4bfd-9dfb-e0faa8857257 from test3_fsmdb0.tmp.agg_out.19d64e54-a241-4ba6-92d0-5f4bcdd29710 to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.830-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (4fb1c91d-b1bb-4f34-b5a6-a959123153c4)'. Ident: 'index-173--2588534479858262356', commit timestamp: 'Timestamp(1574796704, 2798)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.830-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (4fb1c91d-b1bb-4f34-b5a6-a959123153c4)'. Ident: 'index-178--2588534479858262356', commit timestamp: 'Timestamp(1574796704, 2798)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.830-0500 I STORAGE [conn77] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-168--2588534479858262356, commit timestamp: Timestamp(1574796704, 2798)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.830-0500 I INDEX [conn84] Registering index build: a43af36d-d0bc-4b24-97f5-610b5a34327b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.831-0500 I COMMAND [conn64] command test3_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("fe44fdfe-c512-4ee5-9746-0ef4f91d78d0"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2915954206931698547, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6413536294153025808, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test3_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796704648), clusterTime: Timestamp(1574796704, 559) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("fe44fdfe-c512-4ee5-9746-0ef4f91d78d0"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 559), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45474", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 182ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.846-0500 I INDEX [conn84] index build: starting on test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.846-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.846-0500 I STORAGE [conn84] Index build initialized: a43af36d-d0bc-4b24-97f5-610b5a34327b: test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed (d108d732-d756-4f25-8812-a6483de9ea4c ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.846-0500 I INDEX [conn84] Waiting for index build to complete: a43af36d-d0bc-4b24-97f5-610b5a34327b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.846-0500 I COMMAND [conn82] renameCollectionForCommand: rename test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 to test3_fsmdb0.agg_out and drop test3_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.847-0500 I STORAGE [conn82] dropCollection: test3_fsmdb0.agg_out (86906110-e72c-4bfd-9dfb-e0faa8857257) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796704, 2968), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.847-0500 I STORAGE [conn82] Finishing collection drop for test3_fsmdb0.agg_out (86906110-e72c-4bfd-9dfb-e0faa8857257).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.847-0500 I STORAGE [conn82] renameCollection: renaming collection 9994eb3f-8b56-4ba9-9d36-e7240503d188 from test3_fsmdb0.tmp.agg_out.ed8b3633-c75a-49e7-b54b-664ad516d614 to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.847-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (86906110-e72c-4bfd-9dfb-e0faa8857257)'. Ident: 'index-175--2588534479858262356', commit timestamp: 'Timestamp(1574796704, 2968)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.847-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (86906110-e72c-4bfd-9dfb-e0faa8857257)'. Ident: 'index-182--2588534479858262356', commit timestamp: 'Timestamp(1574796704, 2968)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.847-0500 I STORAGE [conn82] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-170--2588534479858262356, commit timestamp: Timestamp(1574796704, 2968)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.847-0500 I COMMAND [conn62] command test3_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("49d93db0-9abd-4103-bc81-b25086705499"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8515651388444169636, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5026479692655778993, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test3_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796704648), clusterTime: Timestamp(1574796704, 559) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("49d93db0-9abd-4103-bc81-b25086705499"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 559), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45478", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 198ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.848-0500 I COMMAND [conn85] renameCollectionForCommand: rename test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 to test3_fsmdb0.agg_out and drop test3_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.848-0500 I STORAGE [conn85] dropCollection: test3_fsmdb0.agg_out (9994eb3f-8b56-4ba9-9d36-e7240503d188) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796704, 3033), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.848-0500 I STORAGE [conn85] Finishing collection drop for test3_fsmdb0.agg_out (9994eb3f-8b56-4ba9-9d36-e7240503d188).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.848-0500 I STORAGE [conn85] renameCollection: renaming collection 044c76b1-1b5b-4981-867a-b6690dd735b8 from test3_fsmdb0.tmp.agg_out.488e7c62-42cb-43ee-976a-108001baa748 to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.848-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (9994eb3f-8b56-4ba9-9d36-e7240503d188)'. Ident: 'index-174--2588534479858262356', commit timestamp: 'Timestamp(1574796704, 3033)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.848-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (9994eb3f-8b56-4ba9-9d36-e7240503d188)'. Ident: 'index-180--2588534479858262356', commit timestamp: 'Timestamp(1574796704, 3033)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.848-0500 I STORAGE [conn85] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-169--2588534479858262356, commit timestamp: Timestamp(1574796704, 3033)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.848-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.849-0500 I COMMAND [conn80] command test3_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("c396db2f-788f-419f-b744-d7ae3889c6f5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7686464888871164630, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7174954862882355649, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test3_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796704650), clusterTime: Timestamp(1574796704, 556) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("c396db2f-788f-419f-b744-d7ae3889c6f5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 562), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58618", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 198ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.849-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.849-0500 I STORAGE [conn85] createCollection: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f with generated UUID: c61745e2-9e2c-43eb-bcff-b2a7c934a0dc and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.851-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.859-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: a43af36d-d0bc-4b24-97f5-610b5a34327b: test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed ( d108d732-d756-4f25-8812-a6483de9ea4c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.859-0500 I INDEX [conn84] Index build completed: a43af36d-d0bc-4b24-97f5-610b5a34327b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.867-0500 I INDEX [conn85] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.867-0500 I COMMAND [conn88] renameCollectionForCommand: rename test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 to test3_fsmdb0.agg_out and drop test3_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.867-0500 I STORAGE [conn88] dropCollection: test3_fsmdb0.agg_out (044c76b1-1b5b-4981-867a-b6690dd735b8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796704, 3089), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.867-0500 I STORAGE [conn88] Finishing collection drop for test3_fsmdb0.agg_out (044c76b1-1b5b-4981-867a-b6690dd735b8).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.867-0500 I STORAGE [conn88] renameCollection: renaming collection 86ac9d9d-dbdd-45b4-b872-4d3229437831 from test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:44.867-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (044c76b1-1b5b-4981-867a-b6690dd735b8)'. Ident: 'index-176--2588534479858262356', commit timestamp: 'Timestamp(1574796704, 3089)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.020-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (044c76b1-1b5b-4981-867a-b6690dd735b8)'. Ident: 'index-184--2588534479858262356', commit timestamp: 'Timestamp(1574796704, 3089)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.020-0500 I STORAGE [conn88] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-171--2588534479858262356, commit timestamp: Timestamp(1574796704, 3089)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.020-0500 I COMMAND [conn88] command test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682 appName: "tid:3" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test3_fsmdb0.tmp.agg_out.1a13dadf-6ddf-46d7-a87e-789b78ff2682", to: "test3_fsmdb0.agg_out", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("8f169c90-43a1-4c3d-84ba-96afe3bea6ba"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 3086), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58622", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 17181 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2169ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.020-0500 I INDEX [conn85] Registering index build: f6e46f30-0eff-430e-bb0b-5166ff899c11
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.020-0500 I COMMAND [conn80] command test3_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } } ], fromMongos: true, needsMerge: true, collation: { locale: "simple" }, cursor: { batchSize: 0 }, runtimeConstants: { localNow: new Date(1574796704851), clusterTime: Timestamp(1574796704, 3085) }, use44SortKeys: true, allowImplicitCollectionCreation: false, shardVersion: [ Timestamp(1, 3), ObjectId('5ddd7da0cf8184c2e1493df9') ], lsid: { id: UUID("c396db2f-788f-419f-b744-d7ae3889c6f5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 3085), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58618", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } planSummary: COLLSCAN cursorid:8803074564796889966 keysExamined:0 docsExamined:0 numYields:0 nreturned:0 queryHash:CC4733C9 planCacheKey:CC4733C9 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2168631 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 3 } } } protocol:op_msg 2168ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.020-0500 I COMMAND [conn75] command test3_fsmdb0.agg_out command: listIndexes { listIndexes: "agg_out", databaseVersion: { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 }, $clusterTime: { clusterTime: Timestamp(1574796704, 3086), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } numYields:0 reslen:495 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2169763 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 2169ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.020-0500 I COMMAND [conn81] command test3_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("8f169c90-43a1-4c3d-84ba-96afe3bea6ba"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1465571478000532307, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 887347545742035382, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test3_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796704650), clusterTime: Timestamp(1574796704, 556) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("8f169c90-43a1-4c3d-84ba-96afe3bea6ba"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 562), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58622", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2369ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.021-0500 I STORAGE [conn88] createCollection: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d with generated UUID: 06ddf75a-604c-46b9-832b-cc3a7313d379 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.021-0500 I STORAGE [conn82] createCollection: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f with generated UUID: c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.021-0500 I COMMAND [conn84] command test3_fsmdb0.fsmcoll0 appName: "tid:4" command: getMore { getMore: 3245724004094637703, collection: "fsmcoll0", lsid: { id: UUID("d53e7426-95aa-4942-aab6-beb057515432"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 3088), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45464", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } originatingCommand: { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } } ], fromMongos: true, needsMerge: true, collation: { locale: "simple" }, cursor: { batchSize: 0 }, runtimeConstants: { localNow: new Date(1574796704807), clusterTime: Timestamp(1574796704, 1078) }, use44SortKeys: true, allowImplicitCollectionCreation: false, shardVersion: [ Timestamp(1, 3), ObjectId('5ddd7da0cf8184c2e1493df9') ], lsid: { id: UUID("d53e7426-95aa-4942-aab6-beb057515432"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 1078), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45464", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } planSummary: COLLSCAN cursorid:3245724004094637703 keysExamined:0 docsExamined:495 cursorExhausted:1 numYields:3 nreturned:247 reslen:252728 locks:{ ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 4 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2160475 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 2161ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.024-0500 I STORAGE [conn77] createCollection: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc with generated UUID: 25f11fc2-52a2-41e6-9ab2-e763ab10ac0f and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.056-0500 I INDEX [conn85] index build: starting on test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.056-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.056-0500 I STORAGE [conn85] Index build initialized: f6e46f30-0eff-430e-bb0b-5166ff899c11: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f (c61745e2-9e2c-43eb-bcff-b2a7c934a0dc ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.056-0500 I INDEX [conn85] Waiting for index build to complete: f6e46f30-0eff-430e-bb0b-5166ff899c11
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.061-0500 I INDEX [conn88] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.066-0500 I INDEX [conn82] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.074-0500 I INDEX [conn77] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.074-0500 I COMMAND [conn84] renameCollectionForCommand: rename test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed to test3_fsmdb0.agg_out and drop test3_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.075-0500 I STORAGE [conn84] dropCollection: test3_fsmdb0.agg_out (86ac9d9d-dbdd-45b4-b872-4d3229437831) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796707, 505), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.075-0500 I STORAGE [conn84] Finishing collection drop for test3_fsmdb0.agg_out (86ac9d9d-dbdd-45b4-b872-4d3229437831).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.075-0500 I STORAGE [conn84] renameCollection: renaming collection d108d732-d756-4f25-8812-a6483de9ea4c from test3_fsmdb0.tmp.agg_out.2c1a46a1-ae0f-440e-aea0-479573af90ed to test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.075-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (86ac9d9d-dbdd-45b4-b872-4d3229437831)'. Ident: 'index-177--2588534479858262356', commit timestamp: 'Timestamp(1574796707, 505)'
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.867-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.075-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (86ac9d9d-dbdd-45b4-b872-4d3229437831)'. Ident: 'index-186--2588534479858262356', commit timestamp: 'Timestamp(1574796707, 505)'
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.867-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.075-0500 I STORAGE [conn84] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-172--2588534479858262356, commit timestamp: Timestamp(1574796707, 505)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.867-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.075-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.867-0500 [jsTest] New session started with sessionID: { "id" : UUID("d520cb23-8658-44be-b979-c285338dfcda") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.075-0500 I INDEX [conn82] Registering index build: 5bd571b7-4228-4cab-baf2-c00e4f009b34
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.868-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.075-0500 I INDEX [conn88] Registering index build: c3a53417-07c1-461b-baeb-2b3b7d81100e
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.868-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.075-0500 I INDEX [conn77] Registering index build: 3735dae9-5f47-4f43-95f3-d9ee00637cdc
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.868-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.075-0500 I COMMAND [conn65] command test3_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d53e7426-95aa-4942-aab6-beb057515432"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4661839519033223101, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3245724004094637703, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test3_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796704807), clusterTime: Timestamp(1574796704, 1078) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d53e7426-95aa-4942-aab6-beb057515432"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 1080), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45464", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2267ms
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.868-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "sharded cluster", "configsvr" : { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }, "shards" : { "shard-rs0" : { "type" : "replica set", "primary" : "localhost:20001", "nodes" : [ "localhost:20001", "localhost:20002", "localhost:20003" ] }, "shard-rs1" : { "type" : "replica set", "primary" : "localhost:20004", "nodes" : [ "localhost:20004", "localhost:20005", "localhost:20006" ] } }, "mongos" : { "type" : "mongos router", "nodes" : [ "localhost:20007", "localhost:20008" ] } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.076-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.087-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.089-0500 I SHARDING [conn55] CMD: shardcollection: { _shardsvrShardCollection: "test3_fsmdb0.agg_out", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("d53e7426-95aa-4942-aab6-beb057515432"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796707, 509), signature: { hash: BinData(0, 8FA7BC795933CD5EC84780342D3C64992EABC6BB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45464", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796707, 509), t: 1 } }, $db: "admin" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.089-0500 I SHARDING [conn55] about to log metadata event into changelog: { _id: "nz_desktop:20004-2019-11-26T14:31:47.089-0500-5ddd7da3cf8184c2e1493fd0", server: "nz_desktop:20004", shard: "shard-rs1", clientAddr: "127.0.0.1:46028", time: new Date(1574796707089), what: "shardCollection.start", ns: "test3_fsmdb0.agg_out", details: { shardKey: { _id: "hashed" }, collection: "test3_fsmdb0.agg_out", uuid: UUID("d108d732-d756-4f25-8812-a6483de9ea4c"), empty: false, fromMapReduce: false, primary: "shard-rs1:shard-rs1/localhost:20004,localhost:20005,localhost:20006", numChunks: 1 } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.095-0500 I INDEX [conn82] index build: starting on test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.095-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.095-0500 I STORAGE [conn82] Index build initialized: 5bd571b7-4228-4cab-baf2-c00e4f009b34: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f (c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.095-0500 I INDEX [conn82] Waiting for index build to complete: 5bd571b7-4228-4cab-baf2-c00e4f009b34
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.095-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: f6e46f30-0eff-430e-bb0b-5166ff899c11: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f ( c61745e2-9e2c-43eb-bcff-b2a7c934a0dc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.097-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.agg_out to version 1|0||5ddd7da3cf8184c2e1493fd4 took 1 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.097-0500 I SHARDING [conn55] Marking collection test3_fsmdb0.agg_out as collection version: 1|0||5ddd7da3cf8184c2e1493fd4, shard version: 1|0||5ddd7da3cf8184c2e1493fd4
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.097-0500 I SHARDING [conn55] Created 1 chunk(s) for: test3_fsmdb0.agg_out, producing collection version 1|0||5ddd7da3cf8184c2e1493fd4
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.097-0500 I SHARDING [conn55] about to log metadata event into changelog: { _id: "nz_desktop:20004-2019-11-26T14:31:47.097-0500-5ddd7da3cf8184c2e1493fdc", server: "nz_desktop:20004", shard: "shard-rs1", clientAddr: "127.0.0.1:46028", time: new Date(1574796707097), what: "shardCollection.end", ns: "test3_fsmdb0.agg_out", details: { version: "1|0||5ddd7da3cf8184c2e1493fd4" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.097-0500 I STORAGE [ShardServerCatalogCacheLoader-1] createCollection: config.cache.chunks.test3_fsmdb0.agg_out with provided UUID: 4c26dac0-af8d-4579-bbb5-32356c1d2f49 and options: { uuid: UUID("4c26dac0-af8d-4579-bbb5-32356c1d2f49") }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.100-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47104 #171 (45 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.101-0500 I NETWORK [conn171] received client metadata from 127.0.0.1:47104 conn171: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.101-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47110 #172 (46 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.101-0500 I NETWORK [conn172] received client metadata from 127.0.0.1:47110 conn172: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.118-0500 I INDEX [conn88] index build: starting on test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.118-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.118-0500 I STORAGE [conn88] Index build initialized: c3a53417-07c1-461b-baeb-2b3b7d81100e: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d (06ddf75a-604c-46b9-832b-cc3a7313d379 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.118-0500 I INDEX [conn88] Waiting for index build to complete: c3a53417-07c1-461b-baeb-2b3b7d81100e
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.118-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.118-0500 I INDEX [conn85] Index build completed: f6e46f30-0eff-430e-bb0b-5166ff899c11
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.118-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.118-0500 I COMMAND [conn85] command test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("fe44fdfe-c512-4ee5-9746-0ef4f91d78d0"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 3089), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45474", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 2152435 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2250ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.126-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: done building index _id_ on ns config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.126-0500 I INDEX [ShardServerCatalogCacheLoader-1] Registering index build: c2a48653-1276-4029-8926-0f2e194429e3
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.126-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.127-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.138-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.142-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.145-0500 I COMMAND [conn65] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.150-0500 I INDEX [conn77] index build: starting on test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.150-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.150-0500 I STORAGE [conn77] Index build initialized: 3735dae9-5f47-4f43-95f3-d9ee00637cdc: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc (25f11fc2-52a2-41e6-9ab2-e763ab10ac0f ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.150-0500 I INDEX [conn77] Waiting for index build to complete: 3735dae9-5f47-4f43-95f3-d9ee00637cdc
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.150-0500 I COMMAND [conn85] renameCollectionForCommand: rename test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f to test3_fsmdb0.agg_out and drop test3_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.151-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: c3a53417-07c1-461b-baeb-2b3b7d81100e: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d ( 06ddf75a-604c-46b9-832b-cc3a7313d379 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.152-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 5bd571b7-4228-4cab-baf2-c00e4f009b34: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f ( c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.167-0500 I INDEX [ShardServerCatalogCacheLoader-1] index build: starting on config.cache.chunks.test3_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.167-0500 I INDEX [ShardServerCatalogCacheLoader-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.167-0500 I STORAGE [ShardServerCatalogCacheLoader-1] Index build initialized: c2a48653-1276-4029-8926-0f2e194429e3: config.cache.chunks.test3_fsmdb0.agg_out (4c26dac0-af8d-4579-bbb5-32356c1d2f49 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.167-0500 I INDEX [ShardServerCatalogCacheLoader-1] Waiting for index build to complete: c2a48653-1276-4029-8926-0f2e194429e3
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.167-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.167-0500 I INDEX [conn88] Index build completed: c3a53417-07c1-461b-baeb-2b3b7d81100e
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.167-0500 I INDEX [conn82] Index build completed: 5bd571b7-4228-4cab-baf2-c00e4f009b34
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.167-0500 I COMMAND [conn65] CMD: dropIndexes test3_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.168-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.168-0500 I COMMAND [conn88] command test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("49d93db0-9abd-4103-bc81-b25086705499"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796707, 504), signature: { hash: BinData(0, 8FA7BC795933CD5EC84780342D3C64992EABC6BB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45478", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796705, 2), t: 1 } }, $db: "test3_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 13101 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 105ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.168-0500 I COMMAND [conn82] command test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("c396db2f-788f-419f-b744-d7ae3889c6f5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796707, 504), signature: { hash: BinData(0, 8FA7BC795933CD5EC84780342D3C64992EABC6BB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58618", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796705, 2), t: 1 } }, $db: "test3_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 8637 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 101ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.168-0500 I COMMAND [conn82] CMD: drop test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.168-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.168-0500 I STORAGE [conn82] dropCollection: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f (c61745e2-9e2c-43eb-bcff-b2a7c934a0dc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.168-0500 I STORAGE [conn82] Finishing collection drop for test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f (c61745e2-9e2c-43eb-bcff-b2a7c934a0dc).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.168-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f (c61745e2-9e2c-43eb-bcff-b2a7c934a0dc)'. Ident: 'index-193--2588534479858262356', commit timestamp: 'Timestamp(1574796707, 1024)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.168-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f (c61745e2-9e2c-43eb-bcff-b2a7c934a0dc)'. Ident: 'index-194--2588534479858262356', commit timestamp: 'Timestamp(1574796707, 1024)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.168-0500 I STORAGE [conn82] Deferring table drop for collection 'test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f'. Ident: collection-192--2588534479858262356, commit timestamp: Timestamp(1574796707, 1024)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.169-0500 I COMMAND [conn64] command test3_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("fe44fdfe-c512-4ee5-9746-0ef4f91d78d0"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8188976172727342084, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3378716903916430981, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test3_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796704833), clusterTime: Timestamp(1574796704, 2966) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("fe44fdfe-c512-4ee5-9746-0ef4f91d78d0"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 3085), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45474", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f\", to: \"test3_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:745 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2319ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.169-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.171-0500 I COMMAND [conn64] CMD: dropIndexes test3_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.171-0500 I COMMAND [conn64] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.172-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.174-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.178-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 3735dae9-5f47-4f43-95f3-d9ee00637cdc: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc ( 25f11fc2-52a2-41e6-9ab2-e763ab10ac0f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.178-0500 I INDEX [conn77] Index build completed: 3735dae9-5f47-4f43-95f3-d9ee00637cdc
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.178-0500 I COMMAND [conn77] command test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("b19cb8b9-6008-4a39-b374-14d64435bb80"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("8f169c90-43a1-4c3d-84ba-96afe3bea6ba"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796707, 505), signature: { hash: BinData(0, 8FA7BC795933CD5EC84780342D3C64992EABC6BB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58622", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796705, 2), t: 1 } }, $db: "test3_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 144 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 103ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.180-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c2a48653-1276-4029-8926-0f2e194429e3: config.cache.chunks.test3_fsmdb0.agg_out ( 4c26dac0-af8d-4579-bbb5-32356c1d2f49 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.180-0500 I INDEX [ShardServerCatalogCacheLoader-1] Index build completed: c2a48653-1276-4029-8926-0f2e194429e3
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.180-0500 I SHARDING [ShardServerCatalogCacheLoader-1] Marking collection config.cache.chunks.test3_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.194-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47116 #173 (47 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.195-0500 I NETWORK [conn173] received client metadata from 127.0.0.1:47116 conn173: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.197-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47122 #174 (48 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.197-0500 I NETWORK [conn174] received client metadata from 127.0.0.1:47122 conn174: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.206-0500 I COMMAND [conn85] CMD: drop test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.206-0500 I COMMAND [conn88] CMD: drop test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.206-0500 I STORAGE [conn85] dropCollection: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f (c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.206-0500 I STORAGE [conn85] Finishing collection drop for test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f (c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.206-0500 I STORAGE [conn88] dropCollection: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d (06ddf75a-604c-46b9-832b-cc3a7313d379) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.206-0500 I STORAGE [conn88] Finishing collection drop for test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d (06ddf75a-604c-46b9-832b-cc3a7313d379).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.206-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f (c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c)'. Ident: 'index-200--2588534479858262356', commit timestamp: 'Timestamp(1574796707, 2484)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.206-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f (c3f9e0a0-1f5d-4c67-ab59-d4fb5f300c9c)'. Ident: 'index-202--2588534479858262356', commit timestamp: 'Timestamp(1574796707, 2484)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.206-0500 I STORAGE [conn85] Deferring table drop for collection 'test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f'. Ident: collection-196--2588534479858262356, commit timestamp: Timestamp(1574796707, 2484)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.206-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d (06ddf75a-604c-46b9-832b-cc3a7313d379)'. Ident: 'index-199--2588534479858262356', commit timestamp: 'Timestamp(1574796707, 2485)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.206-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d (06ddf75a-604c-46b9-832b-cc3a7313d379)'. Ident: 'index-204--2588534479858262356', commit timestamp: 'Timestamp(1574796707, 2485)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.206-0500 I STORAGE [conn88] Deferring table drop for collection 'test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d'. Ident: collection-195--2588534479858262356, commit timestamp: Timestamp(1574796707, 2485)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.206-0500 I COMMAND [conn81] command test3_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("c396db2f-788f-419f-b744-d7ae3889c6f5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2392721151879588387, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8803074564796889966, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test3_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796704851), clusterTime: Timestamp(1574796704, 3085) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("c396db2f-788f-419f-b744-d7ae3889c6f5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796705, 2), signature: { hash: BinData(0, A94F790F2CCAE4650050581E2116F32625D18A5A), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58618", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796705, 2), t: 1 } }, $db: "test3_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f\", to: \"test3_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test3_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:883 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 185ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.207-0500 I COMMAND [conn62] command test3_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("49d93db0-9abd-4103-bc81-b25086705499"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 568500337628208955, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1142861448448978223, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test3_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796704849), clusterTime: Timestamp(1574796704, 3085) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("49d93db0-9abd-4103-bc81-b25086705499"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796704, 3086), signature: { hash: BinData(0, 9FAD85F96B04780E79458F8D92AFFEB7FD9EF168), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45478", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796704, 556), t: 1 } }, $db: "test3_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d\", to: \"test3_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test3_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:883 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2356ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.208-0500 W CONTROL [conn174] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 51 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.209-0500 I COMMAND [conn77] CMD: drop test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.209-0500 I STORAGE [conn77] dropCollection: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc (25f11fc2-52a2-41e6-9ab2-e763ab10ac0f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.209-0500 I STORAGE [conn77] Finishing collection drop for test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc (25f11fc2-52a2-41e6-9ab2-e763ab10ac0f).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.209-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc (25f11fc2-52a2-41e6-9ab2-e763ab10ac0f)'. Ident: 'index-201--2588534479858262356', commit timestamp: 'Timestamp(1574796707, 2539)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.209-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc (25f11fc2-52a2-41e6-9ab2-e763ab10ac0f)'. Ident: 'index-208--2588534479858262356', commit timestamp: 'Timestamp(1574796707, 2539)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.209-0500 I STORAGE [conn77] Deferring table drop for collection 'test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc'. Ident: collection-197--2588534479858262356, commit timestamp: Timestamp(1574796707, 2539)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.209-0500 I COMMAND [conn80] command test3_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("8f169c90-43a1-4c3d-84ba-96afe3bea6ba"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 368888093052401494, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6004341946708811292, ns: "test3_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test3_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796707023), clusterTime: Timestamp(1574796705, 2) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("8f169c90-43a1-4c3d-84ba-96afe3bea6ba"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796707, 2), signature: { hash: BinData(0, 8FA7BC795933CD5EC84780342D3C64992EABC6BB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58622", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796705, 2), t: 1 } }, $db: "test3_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc\", to: \"test3_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test3_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:880 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 185ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.211-0500 I COMMAND [conn65] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.212-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.212-0500 I STORAGE [conn77] createCollection: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 with generated UUID: 6b785421-9783-477e-b4dc-e9674336abe9 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.226-0500 I INDEX [conn77] index build: done building index _id_ on ns test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.227-0500 I INDEX [conn77] Registering index build: d5f319af-e459-46f5-a973-b802d9b46f23
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.242-0500 I INDEX [conn77] index build: starting on test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.242-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.242-0500 I STORAGE [conn77] Index build initialized: d5f319af-e459-46f5-a973-b802d9b46f23: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 (6b785421-9783-477e-b4dc-e9674336abe9 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.242-0500 I INDEX [conn77] Waiting for index build to complete: d5f319af-e459-46f5-a973-b802d9b46f23
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.242-0500 I COMMAND [conn64] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.242-0500 I COMMAND [conn65] CMD: dropIndexes test3_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.242-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.243-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.245-0500 I COMMAND [conn65] CMD: dropIndexes test3_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.247-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.248-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d5f319af-e459-46f5-a973-b802d9b46f23: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 ( 6b785421-9783-477e-b4dc-e9674336abe9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.248-0500 I INDEX [conn77] Index build completed: d5f319af-e459-46f5-a973-b802d9b46f23
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.248-0500 I COMMAND [conn65] CMD: dropIndexes test3_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.249-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.254-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.254-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.256-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.256-0500 I COMMAND [conn80] CMD: dropIndexes test3_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.257-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.266-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.267-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.269-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.270-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.273-0500 I COMMAND [conn77] CMD: drop test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.273-0500 I STORAGE [conn77] dropCollection: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 (6b785421-9783-477e-b4dc-e9674336abe9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.273-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.273-0500 I STORAGE [conn77] Finishing collection drop for test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 (6b785421-9783-477e-b4dc-e9674336abe9).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.273-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 (6b785421-9783-477e-b4dc-e9674336abe9)'. Ident: 'index-213--2588534479858262356', commit timestamp: 'Timestamp(1574796707, 3056)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.273-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7 (6b785421-9783-477e-b4dc-e9674336abe9)'. Ident: 'index-214--2588534479858262356', commit timestamp: 'Timestamp(1574796707, 3056)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.273-0500 I STORAGE [conn77] Deferring table drop for collection 'test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7'. Ident: collection-212--2588534479858262356, commit timestamp: Timestamp(1574796707, 3056)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.273-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.274-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.276-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.277-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.283-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.288-0500 I COMMAND [conn81] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.288-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.291-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.292-0500 I COMMAND [conn81] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.301-0500 I COMMAND [conn62] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.312-0500 I COMMAND [conn81] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.317-0500 I COMMAND [conn80] CMD: dropIndexes test3_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.323-0500 I COMMAND [conn80] CMD: dropIndexes test3_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.324-0500 I COMMAND [conn80] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.328-0500 I COMMAND [conn80] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.333-0500 I COMMAND [conn80] CMD: dropIndexes test3_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.354-0500 W CONTROL [conn174] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 82 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.357-0500 I NETWORK [conn173] end connection 127.0.0.1:47116 (47 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.357-0500 I NETWORK [conn174] end connection 127.0.0.1:47122 (46 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.361-0500 I NETWORK [conn172] end connection 127.0.0.1:47110 (45 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.361-0500 I COMMAND [conn80] CMD: dropIndexes test3_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.367-0500 I NETWORK [conn171] end connection 127.0.0.1:47104 (44 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.384-0500 I NETWORK [conn164] end connection 127.0.0.1:47012 (43 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.384-0500 I NETWORK [conn165] end connection 127.0.0.1:47020 (42 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.385-0500 I NETWORK [conn166] end connection 127.0.0.1:47040 (41 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.385-0500 I NETWORK [conn168] end connection 127.0.0.1:47048 (40 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.386-0500 I NETWORK [conn167] end connection 127.0.0.1:47042 (39 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.397-0500 I NETWORK [conn163] end connection 127.0.0.1:47010 (38 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.951-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47148 #175 (39 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.952-0500 I NETWORK [conn175] received client metadata from 127.0.0.1:47148 conn175: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.952-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47152 #176 (40 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:47.952-0500 I NETWORK [conn176] received client metadata from 127.0.0.1:47152 conn176: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:49.866-0500 I COMMAND [conn176] command admin.$cmd appName: "MongoDB Shell" command: isMaster { isMaster: 1, hostInfo: "nz_desktop:27017", client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }, $db: "admin" } numYields:0 reslen:964 locks:{} protocol:op_query 1913ms
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.919-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:49.919-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45546 #145 (2 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:49.919-0500 I NETWORK [conn145] received client metadata from 127.0.0.1:45546 conn145: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.921-0500 Implicit session: session { "id" : UUID("b6288110-1f3a-48d6-9267-3baa45c04d3a") }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.923-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:49.927-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39690 #173 (40 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:49.927-0500 I NETWORK [conn173] received client metadata from 127.0.0.1:39690 conn173: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.928-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.928-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.928-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.929-0500 [jsTest] New session started with sessionID: { "id" : UUID("6be2db43-94c3-4578-bd11-e3e83352bedb") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.929-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.929-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.929-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.929-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:49.928-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45550 #146 (3 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:49.929-0500 I NETWORK [conn146] received client metadata from 127.0.0.1:45550 conn146: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.930-0500 Implicit session: session { "id" : UUID("6b383769-a49a-4c6b-a3d1-3231d2ac823b") }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.931-0500 Recreating replica set from config {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.931-0500 "_id" : "shard-rs0",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.931-0500 "version" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.931-0500 "protocolVersion" : NumberLong(1),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.931-0500 "writeConcernMajorityJournalDefault" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.931-0500 "members" : [
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.931-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.931-0500 "_id" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.931-0500 "host" : "localhost:20001",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.931-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.931-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.931-0500 "hidden" : false,
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:49.931-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39694 #174 (41 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.932-0500 "priority" : 1,
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:49.931-0500 I NETWORK [conn174] received client metadata from 127.0.0.1:39694 conn174: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.932-0500 "tags" : {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:49.932-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52730 #84 (12 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.932-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.932-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.932-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.932-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.932-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.932-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.932-0500 "_id" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.932-0500 "host" : "localhost:20002",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.933-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.933-0500 "buildIndexes" : true,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:49.932-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53620 #78 (13 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.933-0500 "hidden" : false,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:49.932-0500 I NETWORK [conn84] received client metadata from 127.0.0.1:52730 conn84: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.933-0500 "priority" : 0,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:49.933-0500 I NETWORK [conn78] received client metadata from 127.0.0.1:53620 conn78: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.933-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.933-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.933-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.933-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.934-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.934-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.934-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.934-0500 "_id" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.934-0500 "host" : "localhost:20003",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.934-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.934-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.934-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.934-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.934-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.934-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.934-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.934-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.934-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.935-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.935-0500 ],
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.935-0500 "settings" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.935-0500 "chainingAllowed" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.935-0500 "heartbeatIntervalMillis" : 2000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.935-0500 "heartbeatTimeoutSecs" : 10,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.935-0500 "electionTimeoutMillis" : 86400000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.935-0500 "catchUpTimeoutMillis" : -1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.935-0500 "catchUpTakeoverDelayMillis" : 30000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.935-0500 "getLastErrorModes" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.935-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.935-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.935-0500 "getLastErrorDefaults" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.936-0500 "w" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.936-0500 "wtimeout" : 0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.936-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.936-0500 "replicaSetId" : ObjectId("5ddd7d683bbfe7fa5630d3b8")
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.936-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.936-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.936-0500 MongoDB server version: 0.0.0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.936-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.936-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.936-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.936-0500 [jsTest] New session started with sessionID: { "id" : UUID("8a5085ac-28f2-4da6-92b5-6fc61e4d3ec7") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.936-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.936-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.936-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.937-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.937-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.937-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.937-0500 [jsTest] New session started with sessionID: { "id" : UUID("e1d64c17-817e-447f-a536-2050937173b3") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.937-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.937-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.937-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.937-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.937-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.937-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:49.936-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47166 #177 (41 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.937-0500 [jsTest] New session started with sessionID: { "id" : UUID("e4db7fe1-1de4-4d20-b62e-3c6ae4aaf186") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:49.936-0500 I NETWORK [conn177] received client metadata from 127.0.0.1:47166 conn177: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.938-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.938-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.938-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.938-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.938-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.938-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.938-0500 [jsTest] New session started with sessionID: { "id" : UUID("deb71925-d6db-4fa3-984a-5bcfce742cf7") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.938-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.938-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.938-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.939-0500 Recreating replica set from config {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.939-0500 "_id" : "shard-rs1",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.939-0500 "version" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.939-0500 "protocolVersion" : NumberLong(1),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.939-0500 "writeConcernMajorityJournalDefault" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.939-0500 "members" : [
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.939-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.939-0500 "_id" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.939-0500 "host" : "localhost:20004",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.940-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.940-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.940-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.940-0500 "priority" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.940-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.940-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.940-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.940-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.940-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.940-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.940-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.940-0500 "_id" : 1,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:49.939-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47168 #178 (42 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.941-0500 "host" : "localhost:20005",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.941-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:49.940-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52368 #80 (11 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.941-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.941-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.941-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.941-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.941-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.941-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.941-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.941-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 "_id" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 "host" : "localhost:20006",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 "hidden" : false,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:49.940-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35730 #74 (11 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 ],
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 "settings" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 "chainingAllowed" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 "heartbeatIntervalMillis" : 2000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 "heartbeatTimeoutSecs" : 10,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 "electionTimeoutMillis" : 86400000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.942-0500 "catchUpTimeoutMillis" : -1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500 "catchUpTakeoverDelayMillis" : 30000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500 "getLastErrorModes" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500 "getLastErrorDefaults" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500 "w" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500 "wtimeout" : 0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500 "replicaSetId" : ObjectId("5ddd7d6bcf8184c2e1492eba")
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:49.939-0500 I NETWORK [conn178] received client metadata from 127.0.0.1:47168 conn178: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500 [jsTest] New session started with sessionID: { "id" : UUID("6da63686-e9af-409b-b590-c2d8f740af51") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.943-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.944-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.944-0500 [jsTest] New session started with sessionID: { "id" : UUID("de814254-ee60-459f-a626-482272122515") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.944-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.944-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.944-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.944-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.944-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.944-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.944-0500 [jsTest] New session started with sessionID: { "id" : UUID("fbf14df9-d548-4b72-b67c-a120109d98f8") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.944-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.944-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.944-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:49.940-0500 I NETWORK [conn80] received client metadata from 127.0.0.1:52368 conn80: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:49.941-0500 I NETWORK [conn74] received client metadata from 127.0.0.1:35730 conn74: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.949-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.949-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.949-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.949-0500 [jsTest] Freezing nodes: [localhost:20002,localhost:20003]
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.949-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.949-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.949-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:49.950-0500 I COMMAND [conn84] Attempting to step down in response to replSetStepDown command
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:49.950-0500 I REPL [conn84] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:49.951-0500 I COMMAND [conn78] Attempting to step down in response to replSetStepDown command
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:49.951-0500 I REPL [conn78] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:49.954-0500 I COMMAND [conn174] CMD fsync: sync:1 lock:1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.956-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.956-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.956-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.956-0500 [jsTest] Freezing nodes: [localhost:20005,localhost:20006]
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.956-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.956-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.956-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:49.956-0500 I COMMAND [conn80] Attempting to step down in response to replSetStepDown command
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:49.957-0500 I REPL [conn80] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:49.958-0500 I COMMAND [conn74] Attempting to step down in response to replSetStepDown command
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:49.958-0500 I REPL [conn74] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:49.961-0500 I COMMAND [conn178] CMD fsync: sync:1 lock:1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:49.992-0500 W COMMAND [fsyncLockWorker] WARNING: instance is locked, blocking all writes. The fsync command has finished execution, remember to unlock the instance using fsyncUnlock().
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:49.992-0500 I COMMAND [conn174] mongod is locked and no writes are allowed. db.fsyncUnlock() to unlock
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:49.992-0500 I COMMAND [conn174] Lock count is 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:49.992-0500 I COMMAND [conn174] For more info see http://dochub.mongodb.org/core/fsynccommand
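The "Lock count is 1" line above reflects that fsyncLock is reentrant: each `fsync {lock: 1}` increments a counter, and writes stay blocked until `fsyncUnlock` has been called the same number of times. A toy sketch of that counting behavior (illustrative only, not mongod's implementation):

```python
class FsyncLock:
    """Reentrant write-blocking lock: writes are allowed only at count == 0."""

    def __init__(self):
        self.count = 0

    def lock(self):
        # Corresponds to the fsync command with lock: 1.
        self.count += 1
        return self.count

    def unlock(self):
        # Corresponds to fsyncUnlock; must balance every lock() call.
        if self.count == 0:
            raise RuntimeError("fsyncUnlock called when not locked")
        self.count -= 1
        return self.count

    @property
    def writes_allowed(self):
        return self.count == 0


lk = FsyncLock()
lk.lock()                  # log: "Lock count is 1", writes blocked
print(lk.writes_allowed)   # False
lk.unlock()                # log: "mongod is now unlocked and free to accept writes"
print(lk.writes_allowed)   # True
```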
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.992-0500 ReplSetTest awaitReplication: going to check only localhost:20002,localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.999-0500 ReplSetTest awaitReplication: starting: optime for primary, localhost:20001, is { "ts" : Timestamp(1574796709, 6), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:49.999-0500 ReplSetTest awaitReplication: checking secondaries against latest primary optime { "ts" : Timestamp(1574796709, 6), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.004-0500 ReplSetTest awaitReplication: checking secondary #0: localhost:20002
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.007-0500 ReplSetTest awaitReplication: secondary #0, localhost:20002, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.009-0500 ReplSetTest awaitReplication: checking secondary #1: localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.010-0500 ReplSetTest awaitReplication: secondary #1, localhost:20003, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.010-0500 ReplSetTest awaitReplication: finished: all 2 secondaries synced at optime { "ts" : Timestamp(1574796709, 6), "t" : NumberLong(1) }
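The awaitReplication loop above checks each listed secondary's applied optime against the primary's latest optime `{ ts, t }`. As an illustrative sketch (not the actual ReplSetTest implementation; the `OpTime` class here is hypothetical), the comparison is on term first, then the `(seconds, increment)` timestamp:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class OpTime:
    """Ordering mirrors MongoDB optime comparison: term, then Timestamp(secs, inc)."""
    term: int
    secs: int
    inc: int

    def key(self):
        return (self.term, self.secs, self.inc)


def secondaries_synced(primary_optime, secondary_optimes):
    """True when every secondary has applied at least the primary's optime."""
    return all(s.key() >= primary_optime.key()
               for s in secondary_optimes.values())


# Values taken from the shard-rs0 check in the log above.
primary = OpTime(term=1, secs=1574796709, inc=6)
secondaries = {
    "localhost:20002": OpTime(1, 1574796709, 6),
    "localhost:20003": OpTime(1, 1574796709, 6),
}
print(secondaries_synced(primary, secondaries))  # True: all 2 secondaries synced
```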
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.014-0500 checkDBHashesForReplSet checking data hashes against primary: localhost:20001
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.014-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20002
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.016-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20003
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.102-0500 W COMMAND [fsyncLockWorker] WARNING: instance is locked, blocking all writes. The fsync command has finished execution, remember to unlock the instance using fsyncUnlock().
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.102-0500 I COMMAND [conn178] mongod is locked and no writes are allowed. db.fsyncUnlock() to unlock
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.102-0500 I COMMAND [conn178] Lock count is 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.102-0500 I COMMAND [conn178] For more info see http://dochub.mongodb.org/core/fsynccommand
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.102-0500 I COMMAND [conn178] command admin.$cmd appName: "MongoDB Shell" command: fsync { fsync: 1.0, lock: 1.0, allowFsyncFailure: true, lsid: { id: UUID("6da63686-e9af-409b-b590-c2d8f740af51") }, $clusterTime: { clusterTime: Timestamp(1574796709, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:477 locks:{ Mutex: { acquireCount: { W: 1 } } } protocol:op_msg 141ms
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.102-0500 ReplSetTest awaitReplication: going to check only localhost:20005,localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.109-0500 ReplSetTest awaitReplication: starting: optime for primary, localhost:20004, is { "ts" : Timestamp(1574796709, 8), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.110-0500 ReplSetTest awaitReplication: checking secondaries against latest primary optime { "ts" : Timestamp(1574796709, 8), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.112-0500 ReplSetTest awaitReplication: checking secondary #0: localhost:20005
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.113-0500 ReplSetTest awaitReplication: secondary #0, localhost:20005, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.115-0500 ReplSetTest awaitReplication: checking secondary #1: localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.116-0500 ReplSetTest awaitReplication: secondary #1, localhost:20006, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.116-0500 ReplSetTest awaitReplication: finished: all 2 secondaries synced at optime { "ts" : Timestamp(1574796709, 8), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.120-0500 checkDBHashesForReplSet checking data hashes against primary: localhost:20004
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.120-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20005
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.122-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20006
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:50.126-0500 I COMMAND [conn174] command: unlock requested
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:50.128-0500 I COMMAND [conn174] fsyncUnlock completed. mongod is now unlocked and free to accept writes
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:50.128-0500 I REPL [conn84] 'unfreezing'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:50.129-0500 I REPL [conn78] 'unfreezing'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:50.130-0500 I NETWORK [conn145] end connection 127.0.0.1:45546 (2 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:50.130-0500 I NETWORK [conn173] end connection 127.0.0.1:39690 (40 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:50.131-0500 I NETWORK [conn78] end connection 127.0.0.1:53620 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:50.131-0500 I NETWORK [conn84] end connection 127.0.0.1:52730 (11 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:50.131-0500 I NETWORK [conn174] end connection 127.0.0.1:39694 (39 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.264-0500 I COMMAND [conn178] command: unlock requested
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.266-0500 I COMMAND [conn178] fsyncUnlock completed. mongod is now unlocked and free to accept writes
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:50.267-0500 I REPL [conn80] 'unfreezing'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:50.267-0500 I REPL [conn74] 'unfreezing'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:50.268-0500 I NETWORK [conn146] end connection 127.0.0.1:45550 (1 connection now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.268-0500 I NETWORK [conn177] end connection 127.0.0.1:47166 (41 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:50.269-0500 I NETWORK [conn80] end connection 127.0.0.1:52368 (10 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:50.269-0500 I NETWORK [conn74] end connection 127.0.0.1:35730 (10 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.269-0500 I NETWORK [conn178] end connection 127.0.0.1:47168 (40 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.271-0500 Finished data consistency checks for cluster in 2331 ms.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:50.272-0500 I NETWORK [conn144] end connection 127.0.0.1:45524 (0 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:50.272-0500 I NETWORK [conn126] end connection 127.0.0.1:56930 (33 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:50.272-0500 I NETWORK [conn172] end connection 127.0.0.1:39678 (38 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.273-0500 I NETWORK [conn176] end connection 127.0.0.1:47152 (39 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.280-0500 I NETWORK [conn175] end connection 127.0.0.1:47148 (38 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:50.280-0500 I NETWORK [conn73] end connection 127.0.0.1:35708 (9 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:31:50.281-0500 JSTest jstests/hooks/run_check_repl_dbhash.js finished.
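The CheckReplDBHash hook's overall procedure is visible in the log above: freeze the secondaries, fsync-lock the primary to quiesce writes, await replication, compare per-collection `dbHash` results between the primary and each secondary, then unlock and unfreeze. A minimal Python sketch of just the hash-comparison step (the hash values below are made up for illustration; the real hook compares actual `dbHash` command output):

```python
def check_db_hashes(primary_hashes, secondary_hashes):
    """Return (db, collection) pairs whose hash differs from the primary's.

    Each argument maps db name -> {collection name -> hash string}, mirroring
    the per-collection output of MongoDB's dbHash command.
    """
    mismatches = []
    for db, colls in primary_hashes.items():
        for coll, h in colls.items():
            if secondary_hashes.get(db, {}).get(coll) != h:
                mismatches.append((db, coll))
    return mismatches


# Hypothetical hashes for the shard-rs0 secondaries checked above.
primary = {"test2_fsmdb0": {"agg_out": "ab12", "fsmcoll0": "cd34"}}
in_sync = {"test2_fsmdb0": {"agg_out": "ab12", "fsmcoll0": "cd34"}}
diverged = {"test2_fsmdb0": {"agg_out": "ab12", "fsmcoll0": "ff99"}}

print(check_db_hashes(primary, in_sync))   # [] -- consistent, as in this run
print(check_db_hashes(primary, diverged))  # [('test2_fsmdb0', 'fsmcoll0')]
```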
[executor:fsm_workload_test:job0] 2019-11-26T14:31:50.281-0500 agg_out:CheckReplDBHash ran in 2.43 seconds: no failures detected.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:50.280-0500 I NETWORK [conn79] end connection 127.0.0.1:52344 (9 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:50.280-0500 I NETWORK [conn83] end connection 127.0.0.1:52710 (10 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:50.280-0500 I NETWORK [conn77] end connection 127.0.0.1:53596 (11 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:50.280-0500 I NETWORK [conn125] end connection 127.0.0.1:56928 (32 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:50.280-0500 I NETWORK [conn171] end connection 127.0.0.1:39672 (37 connections now open)
[executor:fsm_workload_test:job0] 2019-11-26T14:31:50.282-0500 Running agg_out:ValidateCollections...
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:50.283-0500 Starting JSTest jstests/hooks/run_validate_collections.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_validate_collections"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_validate_collections.js
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.609-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796704, 8)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:50.933-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796704, 6)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:52.941-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796704, 6)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:52.952-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796704, 6)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.609-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-131--2588534479858262356 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 5)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:50.933-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-329-8224331490264904478 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 13)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:52.941-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-38--4104909142373009110 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 14)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:52.952-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-38--8000595249233899911 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 14)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.612-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-138--2588534479858262356 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 5)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.022-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-39--4104909142373009110 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 14)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:50.936-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-330-8224331490264904478 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 13)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.001-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-39--8000595249233899911 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 14)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.613-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-126--2588534479858262356 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 5)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.084-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-37--4104909142373009110 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 14)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:50.937-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-328-8224331490264904478 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 13)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.047-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-37--8000595249233899911 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 14)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.614-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-147--2588534479858262356 (ns: config.cache.chunks.test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 9)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:50.937-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-332-8224331490264904478 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 21)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.108-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-42--8000595249233899911 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 22)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.614-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-150--2588534479858262356 (ns: config.cache.chunks.test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 9)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:50.938-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-333-8224331490264904478 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 21)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.615-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-145--2588534479858262356 (ns: config.cache.chunks.test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 9)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:50.939-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-331-8224331490264904478 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 21)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.616-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-113--2588534479858262356 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 14)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.617-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-114--2588534479858262356 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 14)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.619-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-112--2588534479858262356 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 14)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.620-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-117--2588534479858262356 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 23)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.621-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-118--2588534479858262356 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 23)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:50.622-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-116--2588534479858262356 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 23)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.116-0500 JSTest jstests/hooks/run_validate_collections.js started with pid 16035.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.133-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-42--4104909142373009110 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 22)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.138-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.154-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-43--4104909142373009110 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 22)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.179-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-43--8000595249233899911 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 22)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.189-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.190-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45566 #147 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.190-0500 I NETWORK [conn147] received client metadata from 127.0.0.1:45566 conn147: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.192-0500 Implicit session: session { "id" : UUID("8395cbf3-cd40-4eac-962b-e88dd2d22972") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.193-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.195-0500 true
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.198-0500 2019-11-26T14:31:53.198-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.198-0500 2019-11-26T14:31:53.198-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.199-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56970 #127 (33 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.199-0500 I NETWORK [conn127] received client metadata from 127.0.0.1:56970 conn127: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.199-0500 2019-11-26T14:31:53.199-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.199-0500 I NETWORK [listener] connection accepted from 127.0.0.1:56972 #128 (34 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.200-0500 I NETWORK [conn128] received client metadata from 127.0.0.1:56972 conn128: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.200-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.201-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.201-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.201-0500 [jsTest] New session started with sessionID: { "id" : UUID("ee562858-2c5b-4418-a552-920dc943f5f3") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.201-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.201-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.201-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.202-0500 2019-11-26T14:31:53.202-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.202-0500 2019-11-26T14:31:53.202-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.202-0500 2019-11-26T14:31:53.202-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.203-0500 2019-11-26T14:31:53.202-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.203-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52748 #85 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.203-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53640 #79 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.203-0500 I NETWORK [conn85] received client metadata from 127.0.0.1:52748 conn85: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.203-0500 I NETWORK [conn79] received client metadata from 127.0.0.1:53640 conn79: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.203-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39716 #175 (38 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.203-0500 I NETWORK [conn175] received client metadata from 127.0.0.1:39716 conn175: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.203-0500 2019-11-26T14:31:53.203-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.203-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39720 #176 (39 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.204-0500 I NETWORK [conn176] received client metadata from 127.0.0.1:39720 conn176: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.204-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.204-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.204-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.204-0500 [jsTest] New session started with sessionID: { "id" : UUID("0a29ee78-316e-4804-9b53-e2389224481f") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.205-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.205-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.205-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.205-0500 2019-11-26T14:31:53.205-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.205-0500 2019-11-26T14:31:53.205-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.205-0500 2019-11-26T14:31:53.205-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.205-0500 2019-11-26T14:31:53.205-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.205-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35746 #75 (10 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:53.205-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47192 #179 (39 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.205-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52388 #81 (10 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.205-0500 I NETWORK [conn75] received client metadata from 127.0.0.1:35746 conn75: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:53.205-0500 I NETWORK [conn179] received client metadata from 127.0.0.1:47192 conn179: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.206-0500 2019-11-26T14:31:53.206-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.205-0500 I NETWORK [conn81] received client metadata from 127.0.0.1:52388 conn81: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:53.206-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47194 #180 (40 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:53.206-0500 I NETWORK [conn180] received client metadata from 127.0.0.1:47194 conn180: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.207-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.207-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.207-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.207-0500 [jsTest] New session started with sessionID: { "id" : UUID("2f9fec9b-14d7-40f1-8a4b-9e9fc5a7ee60") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.207-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.207-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.207-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.242-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-41--4104909142373009110 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 22)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.242-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-41--8000595249233899911 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 22)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.285-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.285-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.285-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45588 #148 (2 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.286-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45590 #149 (3 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.286-0500 I NETWORK [conn148] received client metadata from 127.0.0.1:45588 conn148: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.286-0500 I NETWORK [conn149] received client metadata from 127.0.0.1:45590 conn149: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.286-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.286-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.286-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.286-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45592 #150 (4 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.286-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45594 #151 (5 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.287-0500 I NETWORK [conn150] received client metadata from 127.0.0.1:45592 conn150: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.287-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45596 #152 (6 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.287-0500 I NETWORK [conn151] received client metadata from 127.0.0.1:45594 conn151: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.287-0500 I NETWORK [conn152] received client metadata from 127.0.0.1:45596 conn152: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.287-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.287-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45598 #153 (7 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.287-0500 I NETWORK [conn153] received client metadata from 127.0.0.1:45598 conn153: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.288-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.288-0500 Implicit session: session { "id" : UUID("eb08df52-a7b3-4241-b274-a4f1379133c1") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.288-0500 Implicit session: session { "id" : UUID("5b9bb4ec-a7a3-4a48-8eb0-92554674476e") }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.288-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45600 #154 (8 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.288-0500 I NETWORK [conn154] received client metadata from 127.0.0.1:45600 conn154: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.288-0500 Implicit session: session { "id" : UUID("bb16d904-2525-46b2-be68-13eddc7ae5ea") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.288-0500 Implicit session: session { "id" : UUID("576f1ed2-f808-435e-99b8-df4cdbb7dbad") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.288-0500 Implicit session: session { "id" : UUID("da5f649f-f388-496b-a9e4-1506909f5cee") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.289-0500 Implicit session: session { "id" : UUID("27fa7a20-bbac-46ea-857d-b6239bf56d4a") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.289-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.289-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.290-0500 Implicit session: session { "id" : UUID("ad706e62-0170-4e5b-8d57-4dab557931c6") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.290-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.290-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.290-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.291-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.291-0500 Running validate() on localhost:20000
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.291-0500 Running validate() on localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.291-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57004 #129 (35 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.291-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-54--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1572)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.292-0500 Running validate() on localhost:20004
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.292-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.292-0500 Running validate() on localhost:20002
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.292-0500 Running validate() on localhost:20005
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.291-0500 I NETWORK [conn129] received client metadata from 127.0.0.1:57004 conn129: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.291-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39746 #177 (40 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.291-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-54--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1572)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:53.291-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47214 #181 (41 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.292-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52416 #82 (11 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.292-0500 Running validate() on localhost:20003
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.291-0500 I NETWORK [conn177] received client metadata from 127.0.0.1:39746 conn177: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500 [jsTest] New session started with sessionID: { "id" : UUID("e49435c8-b92d-488d-881e-b91b86a8a0d6") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.292-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52784 #86 (12 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500 [jsTest] New session started with sessionID: { "id" : UUID("7a5de6ed-2fe1-4dfa-95ed-3c1edd0f7c1b") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.293-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500 [jsTest] New session started with sessionID: { "id" : UUID("dde9e419-c5ac-45c6-82b2-fb794d6355b6") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500 [jsTest] New session started with sessionID: { "id" : UUID("062a8369-aa85-4710-98c0-d8991d6c8cc4") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500 Running validate() on localhost:20006
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500 [jsTest] New session started with sessionID: { "id" : UUID("13610985-66ad-42da-9287-9b4db3b8e39b") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500 [jsTest] New session started with sessionID: { "id" : UUID("d889996e-68cb-4b26-af01-0ebccab29090") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.294-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.295-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.295-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:53.292-0500 I NETWORK [conn181] received client metadata from 127.0.0.1:47214 conn181: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.292-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53676 #80 (13 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.292-0500 I NETWORK [conn86] received client metadata from 127.0.0.1:52784 conn86: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.293-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35780 #76 (11 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.293-0500 I NETWORK [conn82] received client metadata from 127.0.0.1:52416 conn82: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.295-0500 I NETWORK [conn76] received client metadata from 127.0.0.1:35780 conn76: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.293-0500 I NETWORK [conn80] received client metadata from 127.0.0.1:53676 conn80: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.296-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.296-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.296-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.296-0500 [jsTest] New session started with sessionID: { "id" : UUID("107e60c6-371c-4df8-9cbe-efcb5822e5b5") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.296-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.296-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:53.296-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.296-0500 I COMMAND [conn177] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:53.297-0500 I COMMAND [conn181] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.297-0500 I COMMAND [conn86] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.297-0500 I COMMAND [conn129] CMD: validate admin.system.keys, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.298-0500 I COMMAND [conn80] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.298-0500 I INDEX [conn177] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.298-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection admin.system.keys
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.299-0500 I COMMAND [conn82] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.299-0500 W STORAGE [conn82] Could not complete validation of table:collection-17--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.299-0500 I INDEX [conn82] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.299-0500 W STORAGE [conn82] Could not complete validation of table:index-18--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.300-0500 I INDEX [conn82] validating collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.300-0500 I INDEX [conn82] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.300-0500 I INDEX [conn82] Validation complete for collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.300-0500 I COMMAND [conn76] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.300-0500 W STORAGE [conn76] Could not complete validation of table:collection-17--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.300-0500 I INDEX [conn76] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.300-0500 I INDEX [conn129] validating collection admin.system.keys (UUID: 807238e6-a72f-4ef0-b305-4bab60afd0e6)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.300-0500 I INDEX [conn129] validating index consistency _id_ on collection admin.system.keys
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.300-0500 I INDEX [conn129] Validation complete for collection admin.system.keys (UUID: 807238e6-a72f-4ef0-b305-4bab60afd0e6). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.300-0500 W STORAGE [conn76] Could not complete validation of table:index-18--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.300-0500 I INDEX [conn76] validating collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.300-0500 I INDEX [conn76] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.300-0500 I INDEX [conn76] Validation complete for collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.301-0500 I INDEX [conn177] validating collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.301-0500 I COMMAND [conn129] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.301-0500 I INDEX [conn177] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.301-0500 I INDEX [conn177] Validation complete for collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.301-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.302-0500 I COMMAND [conn82] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.302-0500 W STORAGE [conn82] Could not complete validation of table:collection-29--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.302-0500 I INDEX [conn82] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.302-0500 W STORAGE [conn82] Could not complete validation of table:index-30--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.302-0500 I INDEX [conn82] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.302-0500 W STORAGE [conn82] Could not complete validation of table:index-31--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.302-0500 I INDEX [conn82] validating collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.302-0500 I INDEX [conn82] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.302-0500 I INDEX [conn82] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.302-0500 I INDEX [conn82] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.302-0500 I COMMAND [conn76] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.302-0500 W STORAGE [conn76] Could not complete validation of table:collection-29--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.302-0500 I INDEX [conn76] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.302-0500 W STORAGE [conn76] Could not complete validation of table:index-30--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.302-0500 I INDEX [conn76] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.302-0500 W STORAGE [conn76] Could not complete validation of table:index-31--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.302-0500 I INDEX [conn76] validating collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.302-0500 I INDEX [conn76] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.302-0500 I INDEX [conn76] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.302-0500 I INDEX [conn76] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.303-0500 I COMMAND [conn177] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.303-0500 I COMMAND [conn76] CMD: validate config.cache.chunks.test3_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.303-0500 W STORAGE [conn76] Could not complete validation of table:collection-211--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.303-0500 I INDEX [conn76] validating the internal structure of index _id_ on collection config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.303-0500 W STORAGE [conn76] Could not complete validation of table:index-212--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.303-0500 I INDEX [conn76] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.303-0500 W STORAGE [conn76] Could not complete validation of table:index-219--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.303-0500 I COMMAND [conn82] CMD: validate config.cache.chunks.test3_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.303-0500 I INDEX [conn76] validating collection config.cache.chunks.test3_fsmdb0.agg_out (UUID: 4c26dac0-af8d-4579-bbb5-32356c1d2f49)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.303-0500 W STORAGE [conn82] Could not complete validation of table:collection-211--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.303-0500 I INDEX [conn82] validating the internal structure of index _id_ on collection config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.303-0500 W STORAGE [conn82] Could not complete validation of table:index-212--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.303-0500 I INDEX [conn76] validating index consistency _id_ on collection config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.303-0500 I INDEX [conn82] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.303-0500 I INDEX [conn76] validating index consistency lastmod_1 on collection config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.303-0500 W STORAGE [conn82] Could not complete validation of table:index-219--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.304-0500 I INDEX [conn129] validating collection admin.system.version (UUID: 1b1834a4-71ee-49e7-abbc-7ae09d5089b2)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.303-0500 I INDEX [conn76] Validation complete for collection config.cache.chunks.test3_fsmdb0.agg_out (UUID: 4c26dac0-af8d-4579-bbb5-32356c1d2f49). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.304-0500 I INDEX [conn177] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.304-0500 I INDEX [conn129] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.303-0500 I INDEX [conn82] validating collection config.cache.chunks.test3_fsmdb0.agg_out (UUID: 4c26dac0-af8d-4579-bbb5-32356c1d2f49)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.304-0500 I COMMAND [conn76] CMD: validate config.cache.chunks.test3_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.304-0500 I INDEX [conn129] Validation complete for collection admin.system.version (UUID: 1b1834a4-71ee-49e7-abbc-7ae09d5089b2). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.303-0500 I INDEX [conn82] validating index consistency _id_ on collection config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.304-0500 W STORAGE [conn76] Could not complete validation of table:collection-169--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.303-0500 I INDEX [conn82] validating index consistency lastmod_1 on collection config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.304-0500 I INDEX [conn76] validating the internal structure of index _id_ on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.303-0500 I INDEX [conn82] Validation complete for collection config.cache.chunks.test3_fsmdb0.agg_out (UUID: 4c26dac0-af8d-4579-bbb5-32356c1d2f49). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.304-0500 W STORAGE [conn76] Could not complete validation of table:index-170--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.304-0500 I INDEX [conn76] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.304-0500 W STORAGE [conn76] Could not complete validation of table:index-171--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.304-0500 I INDEX [conn76] validating collection config.cache.chunks.test3_fsmdb0.fsmcoll0 (UUID: a33e44c0-60ea-478a-83bd-e45f3213aca7)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.304-0500 I INDEX [conn76] validating index consistency _id_ on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.304-0500 I INDEX [conn76] validating index consistency lastmod_1 on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.304-0500 I INDEX [conn76] Validation complete for collection config.cache.chunks.test3_fsmdb0.fsmcoll0 (UUID: a33e44c0-60ea-478a-83bd-e45f3213aca7). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.304-0500 I COMMAND [conn82] CMD: validate config.cache.chunks.test3_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.304-0500 W STORAGE [conn82] Could not complete validation of table:collection-169--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.304-0500 I INDEX [conn82] validating the internal structure of index _id_ on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.304-0500 W STORAGE [conn82] Could not complete validation of table:index-170--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.304-0500 I INDEX [conn82] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.304-0500 W STORAGE [conn82] Could not complete validation of table:index-171--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.304-0500 I INDEX [conn82] validating collection config.cache.chunks.test3_fsmdb0.fsmcoll0 (UUID: a33e44c0-60ea-478a-83bd-e45f3213aca7)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.304-0500 I INDEX [conn82] validating index consistency _id_ on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.305-0500 I INDEX [conn82] validating index consistency lastmod_1 on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.305-0500 I INDEX [conn82] Validation complete for collection config.cache.chunks.test3_fsmdb0.fsmcoll0 (UUID: a33e44c0-60ea-478a-83bd-e45f3213aca7). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.305-0500 I COMMAND [conn76] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.305-0500 W STORAGE [conn76] Could not complete validation of table:collection-27--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.305-0500 I INDEX [conn76] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.305-0500 W STORAGE [conn76] Could not complete validation of table:index-28--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.305-0500 I INDEX [conn76] validating collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.305-0500 I INDEX [conn76] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.305-0500 I INDEX [conn76] Validation complete for collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.305-0500 I COMMAND [conn82] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.305-0500 W STORAGE [conn82] Could not complete validation of table:collection-27--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.305-0500 I INDEX [conn82] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.305-0500 I INDEX [conn177] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.305-0500 W STORAGE [conn82] Could not complete validation of table:index-28--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.306-0500 I COMMAND [conn76] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.306-0500 W STORAGE [conn76] Could not complete validation of table:collection-25--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.306-0500 I INDEX [conn76] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.306-0500 W STORAGE [conn76] Could not complete validation of table:index-26--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.306-0500 I INDEX [conn76] validating collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.306-0500 I INDEX [conn76] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.306-0500 I COMMAND [conn129] CMD: validate config.actionlog, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.305-0500 I INDEX [conn82] validating collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.306-0500 I INDEX [conn76] Validation complete for collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.306-0500 I INDEX [conn82] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.306-0500 I INDEX [conn82] Validation complete for collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.306-0500 I COMMAND [conn82] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.306-0500 I COMMAND [conn76] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.306-0500 W STORAGE [conn76] Could not complete validation of table:collection-21--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.306-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection config.actionlog
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.306-0500 W STORAGE [conn82] Could not complete validation of table:collection-25--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.306-0500 I INDEX [conn82] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.306-0500 W STORAGE [conn82] Could not complete validation of table:index-26--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.307-0500 I INDEX [conn82] validating collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.307-0500 I INDEX [conn82] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.307-0500 I INDEX [conn82] Validation complete for collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.306-0500 I INDEX [conn76] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.306-0500 W STORAGE [conn76] Could not complete validation of table:index-22--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.307-0500 I INDEX [conn76] validating collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.307-0500 I INDEX [conn76] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.307-0500 I INDEX [conn76] Validation complete for collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.307-0500 I COMMAND [conn82] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.307-0500 W STORAGE [conn82] Could not complete validation of table:collection-21--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.307-0500 I INDEX [conn82] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.307-0500 W STORAGE [conn82] Could not complete validation of table:index-22--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.308-0500 I INDEX [conn82] validating collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.308-0500 I INDEX [conn82] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.308-0500 I INDEX [conn82] Validation complete for collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.308-0500 I INDEX [conn177] validating collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.308-0500 I INDEX [conn177] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.308-0500 I INDEX [conn177] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.308-0500 I INDEX [conn177] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.308-0500 I COMMAND [conn76] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.309-0500 I COMMAND [conn177] CMD: validate config.cache.chunks.test3_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.309-0500 W STORAGE [conn177] Could not complete validation of table:collection-338-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.309-0500 I INDEX [conn177] validating the internal structure of index _id_ on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.309-0500 W STORAGE [conn177] Could not complete validation of table:index-339-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.309-0500 I INDEX [conn177] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.309-0500 W STORAGE [conn177] Could not complete validation of table:index-340-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.309-0500 I INDEX [conn177] validating collection config.cache.chunks.test3_fsmdb0.fsmcoll0 (UUID: d291b2bc-f179-4f06-8164-0b81d0131eb1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.309-0500 I INDEX [conn177] validating index consistency _id_ on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.309-0500 I INDEX [conn177] validating index consistency lastmod_1 on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.309-0500 I INDEX [conn177] Validation complete for collection config.cache.chunks.test3_fsmdb0.fsmcoll0 (UUID: d291b2bc-f179-4f06-8164-0b81d0131eb1). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.310-0500 I COMMAND [conn82] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.310-0500 I COMMAND [conn177] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.310-0500 W STORAGE [conn177] Could not complete validation of table:collection-20-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.310-0500 I INDEX [conn177] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.310-0500 W STORAGE [conn177] Could not complete validation of table:index-23-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.310-0500 I INDEX [conn177] validating collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.310-0500 I INDEX [conn177] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.310-0500 I INDEX [conn177] Validation complete for collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.311-0500 I COMMAND [conn177] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.311-0500 W STORAGE [conn177] Could not complete validation of table:collection-19-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.311-0500 I INDEX [conn177] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.311-0500 W STORAGE [conn177] Could not complete validation of table:index-21-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.311-0500 I INDEX [conn177] validating collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.311-0500 I INDEX [conn177] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.311-0500 I INDEX [conn177] Validation complete for collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.312-0500 I COMMAND [conn177] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.315-0500 W STORAGE [conn76] Could not complete validation of table:collection-16--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.315-0500 I INDEX [conn76] validating collection local.oplog.rs (UUID: 307925b3-4143-4c06-a46a-f04119b3afb4)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.317-0500 W STORAGE [conn82] Could not complete validation of table:collection-16--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.317-0500 I INDEX [conn82] validating collection local.oplog.rs (UUID: 6c707c3f-4064-4e35-98fb-b2fff8245539)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.337-0500 I INDEX [conn76] Validation complete for collection local.oplog.rs (UUID: 307925b3-4143-4c06-a46a-f04119b3afb4). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.338-0500 I COMMAND [conn76] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.339-0500 I INDEX [conn82] Validation complete for collection local.oplog.rs (UUID: 6c707c3f-4064-4e35-98fb-b2fff8245539). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.340-0500 I COMMAND [conn82] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.354-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-55--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1572)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.354-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-55--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1572)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.354-0500 I INDEX [conn129] validating collection config.actionlog (UUID: ff427093-1de4-4a9f-83c9-6b01392e1aea)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.354-0500 I INDEX [conn129] validating index consistency _id_ on collection config.actionlog
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.354-0500 I INDEX [conn129] Validation complete for collection config.actionlog (UUID: ff427093-1de4-4a9f-83c9-6b01392e1aea). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.355-0500 I INDEX [conn76] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.355-0500 I INDEX [conn86] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.355-0500 I COMMAND [conn129] CMD: validate config.changelog, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.355-0500 W STORAGE [conn129] Could not complete validation of table:collection-49-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.355-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection config.changelog
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.355-0500 W STORAGE [conn129] Could not complete validation of table:index-50-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.355-0500 I INDEX [conn129] validating collection config.changelog (UUID: 65b892c8-48e9-4ca9-8300-743a486a361f)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.355-0500 I INDEX [conn177] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.355-0500 I INDEX [conn129] validating index consistency _id_ on collection config.changelog
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.355-0500 I INDEX [conn129] Validation complete for collection config.changelog (UUID: 65b892c8-48e9-4ca9-8300-743a486a361f). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.355-0500 I INDEX [conn80] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.356-0500 I COMMAND [conn129] CMD: validate config.chunks, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.356-0500 W STORAGE [conn129] Could not complete validation of table:collection-17-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.356-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection config.chunks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.356-0500 W STORAGE [conn129] Could not complete validation of table:index-18-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.356-0500 I INDEX [conn129] validating the internal structure of index ns_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.356-0500 W STORAGE [conn129] Could not complete validation of table:index-19-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.356-0500 I INDEX [conn129] validating the internal structure of index ns_1_shard_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.356-0500 W STORAGE [conn129] Could not complete validation of table:index-20-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.356-0500 I INDEX [conn129] validating the internal structure of index ns_1_lastmod_1 on collection config.chunks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.356-0500 W STORAGE [conn129] Could not complete validation of table:index-21-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.356-0500 I INDEX [conn129] validating collection config.chunks (UUID: e7035d0b-a892-4426-b520-83da62bcbda6)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.356-0500 I INDEX [conn129] validating index consistency _id_ on collection config.chunks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.356-0500 I INDEX [conn129] validating index consistency ns_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.356-0500 I INDEX [conn129] validating index consistency ns_1_shard_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.356-0500 I INDEX [conn129] validating index consistency ns_1_lastmod_1 on collection config.chunks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.356-0500 I INDEX [conn129] Validation complete for collection config.chunks (UUID: e7035d0b-a892-4426-b520-83da62bcbda6). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.357-0500 I INDEX [conn177] validating the internal structure of index lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.357-0500 I INDEX [conn82] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.357-0500 I COMMAND [conn129] CMD: validate config.collections, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.357-0500 W STORAGE [conn129] Could not complete validation of table:collection-51-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.357-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection config.collections
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.357-0500 W STORAGE [conn129] Could not complete validation of table:index-52-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.357-0500 I INDEX [conn129] validating collection config.collections (UUID: c846d630-16e0-4675-b90f-3cd769544ef0)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.357-0500 I INDEX [conn76] validating collection local.replset.election (UUID: 7b059263-7419-4cf5-8072-b44957d729c9)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.357-0500 I INDEX [conn129] validating index consistency _id_ on collection config.collections
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.357-0500 I INDEX [conn76] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.357-0500 I INDEX [conn76] Validation complete for collection local.replset.election (UUID: 7b059263-7419-4cf5-8072-b44957d729c9). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.357-0500 I INDEX [conn129] Validation complete for collection config.collections (UUID: c846d630-16e0-4675-b90f-3cd769544ef0). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.358-0500 I COMMAND [conn76] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.358-0500 W STORAGE [conn76] Could not complete validation of table:collection-4--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.358-0500 I INDEX [conn76] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.358-0500 I COMMAND [conn129] CMD: validate config.databases, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.358-0500 W STORAGE [conn129] Could not complete validation of table:collection-55-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.358-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection config.databases
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.358-0500 W STORAGE [conn129] Could not complete validation of table:index-56-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.358-0500 I INDEX [conn129] validating collection config.databases (UUID: 1c31f9a7-ee46-41d3-a296-2e1f323b51b8)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.358-0500 I INDEX [conn129] validating index consistency _id_ on collection config.databases
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.358-0500 I INDEX [conn129] Validation complete for collection config.databases (UUID: 1c31f9a7-ee46-41d3-a296-2e1f323b51b8). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.359-0500 I COMMAND [conn129] CMD: validate config.lockpings, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.359-0500 W STORAGE [conn129] Could not complete validation of table:collection-32-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.359-0500 I INDEX [conn177] validating collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.359-0500 I INDEX [conn177] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.359-0500 I INDEX [conn177] validating index consistency lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.359-0500 I INDEX [conn177] Validation complete for collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.359-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection config.lockpings
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.359-0500 I INDEX [conn76] validating collection local.replset.minvalid (UUID: e1166351-a2a9-4335-b202-a653b252b811)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.359-0500 I INDEX [conn76] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.359-0500 I INDEX [conn76] Validation complete for collection local.replset.minvalid (UUID: e1166351-a2a9-4335-b202-a653b252b811). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.360-0500 I COMMAND [conn177] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.360-0500 W STORAGE [conn177] Could not complete validation of table:collection-15-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.360-0500 I INDEX [conn177] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.360-0500 W STORAGE [conn177] Could not complete validation of table:index-16-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.360-0500 I INDEX [conn177] validating collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.360-0500 I INDEX [conn177] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.360-0500 I INDEX [conn177] Validation complete for collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.360-0500 I COMMAND [conn76] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.360-0500 I INDEX [conn129] validating the internal structure of index ping_1 on collection config.lockpings
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.360-0500 W STORAGE [conn129] Could not complete validation of table:index-34-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.360-0500 I INDEX [conn129] validating collection config.lockpings (UUID: f662f115-623a-496b-9953-7132cdf8c056)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.360-0500 I INDEX [conn129] validating index consistency _id_ on collection config.lockpings
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.360-0500 I INDEX [conn129] validating index consistency ping_1 on collection config.lockpings
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.360-0500 I INDEX [conn129] Validation complete for collection config.lockpings (UUID: f662f115-623a-496b-9953-7132cdf8c056). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.361-0500 I COMMAND [conn129] CMD: validate config.locks, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.361-0500 W STORAGE [conn129] Could not complete validation of table:collection-28-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.361-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection config.locks
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.361-0500 W STORAGE [conn129] Could not complete validation of table:index-29-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.361-0500 I INDEX [conn129] validating the internal structure of index ts_1 on collection config.locks
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.362-0500 I COMMAND [conn177] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.364-0500 I INDEX [conn82] validating collection local.replset.election (UUID: 6a83721b-d0f2-438c-a2e3-ec6a11e75236)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.399-0500 I NETWORK [conn149] end connection 127.0.0.1:45590 (7 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.403-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-53--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1572)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.403-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-53--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1572)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.491-0500 I INDEX [conn76] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.361-0500 W STORAGE [conn129] Could not complete validation of table:index-30-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.362-0500 W STORAGE [conn177] Could not complete validation of table:collection-10-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.364-0500 I INDEX [conn82] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.415-0500 I NETWORK [conn148] end connection 127.0.0.1:45588 (6 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.490-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-58--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1817)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.466-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-58--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1817)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.493-0500 I INDEX [conn76] validating collection local.replset.oplogTruncateAfterPoint (UUID: 022b88bb-9282-4f39-aad1-6988341f4ac1)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.361-0500 I INDEX [conn129] validating the internal structure of index state_1_process_1 on collection config.locks
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.362-0500 I INDEX [conn177] validating collection local.oplog.rs (UUID: 5f1b9ff7-2fef-4590-8e90-0f3704b0f5df)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.364-0500 I INDEX [conn82] Validation complete for collection local.replset.election (UUID: 6a83721b-d0f2-438c-a2e3-ec6a11e75236). No corruption found.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.493-0500 I NETWORK [conn152] end connection 127.0.0.1:45596 (5 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.491-0500 I INDEX [conn86] validating collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.467-0500 I INDEX [conn80] validating collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.493-0500 I INDEX [conn76] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.361-0500 W STORAGE [conn129] Could not complete validation of table:index-31-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.388-0500 I INDEX [conn177] Validation complete for collection local.oplog.rs (UUID: 5f1b9ff7-2fef-4590-8e90-0f3704b0f5df). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.365-0500 I COMMAND [conn82] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:53.512-0500 I NETWORK [conn154] end connection 127.0.0.1:45600 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.491-0500 I INDEX [conn86] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.467-0500 I INDEX [conn80] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.493-0500 I INDEX [conn76] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 022b88bb-9282-4f39-aad1-6988341f4ac1). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.361-0500 I INDEX [conn129] validating collection config.locks (UUID: dbde06c7-d8ac-4f80-ab9f-cae486f16451)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.388-0500 I COMMAND [conn177] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.365-0500 W STORAGE [conn82] Could not complete validation of table:collection-4--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.491-0500 I INDEX [conn86] Validation complete for collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.467-0500 I INDEX [conn80] Validation complete for collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.493-0500 I COMMAND [conn76] command local.$cmd appName: "MongoDB Shell" command: validate { validate: "replset.oplogTruncateAfterPoint", full: true, lsid: { id: UUID("107e60c6-371c-4df8-9cbe-efcb5822e5b5") }, $clusterTime: { clusterTime: Timestamp(1574796709, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:563 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 287, timeReadingMicros: 7 } } protocol:op_msg 133ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.361-0500 I INDEX [conn129] validating index consistency _id_ on collection config.locks
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.389-0500 I INDEX [conn177] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.365-0500 I INDEX [conn82] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.491-0500 I COMMAND [conn86] command admin.$cmd appName: "MongoDB Shell" command: validate { validate: "system.version", full: true, lsid: { id: UUID("062a8369-aa85-4710-98c0-d8991d6c8cc4") }, $clusterTime: { clusterTime: Timestamp(1574796709, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:546 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { W: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 436, timeReadingMicros: 8 }, timeWaitingMicros: { schemaLock: 85741 } } protocol:op_msg 194ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.468-0500 I COMMAND [conn80] command admin.$cmd appName: "MongoDB Shell" command: validate { validate: "system.version", full: true, lsid: { id: UUID("13610985-66ad-42da-9287-9b4db3b8e39b") }, $clusterTime: { clusterTime: Timestamp(1574796709, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:546 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { W: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 436, timeReadingMicros: 9 }, timeWaitingMicros: { schemaLock: 60940 } } protocol:op_msg 169ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.494-0500 I COMMAND [conn76] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.495-0500 I INDEX [conn76] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.391-0500 I INDEX [conn177] validating collection local.replset.election (UUID: 801ad0de-17c3-44b2-a878-e91b8de004c5)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.370-0500 I INDEX [conn82] validating collection local.replset.minvalid (UUID: 3f481e27-9697-4b6d-b77b-0bd9b43c5dfa)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.493-0500 I COMMAND [conn86] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.469-0500 I COMMAND [conn80] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.361-0500 I INDEX [conn129] validating index consistency ts_1 on collection config.locks
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.497-0500 I INDEX [conn76] validating collection local.startup_log (UUID: 62f9eac5-a715-4818-9af1-edc47894f622)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.391-0500 I INDEX [conn177] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.370-0500 I INDEX [conn82] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.553-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-67--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1817)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.516-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-67--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1817)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.361-0500 I INDEX [conn129] validating index consistency state_1_process_1 on collection config.locks
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.497-0500 I INDEX [conn76] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.391-0500 I INDEX [conn177] Validation complete for collection local.replset.election (UUID: 801ad0de-17c3-44b2-a878-e91b8de004c5). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.370-0500 I INDEX [conn82] Validation complete for collection local.replset.minvalid (UUID: 3f481e27-9697-4b6d-b77b-0bd9b43c5dfa). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.554-0500 I INDEX [conn86] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.516-0500 I INDEX [conn80] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.362-0500 I INDEX [conn129] Validation complete for collection config.locks (UUID: dbde06c7-d8ac-4f80-ab9f-cae486f16451). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.497-0500 I INDEX [conn76] Validation complete for collection local.startup_log (UUID: 62f9eac5-a715-4818-9af1-edc47894f622). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.392-0500 I COMMAND [conn177] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.371-0500 I COMMAND [conn82] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.602-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-57--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1817)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.577-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-57--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 1817)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.362-0500 I COMMAND [conn129] CMD: validate config.migrations, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.497-0500 I COMMAND [conn76] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.393-0500 I INDEX [conn177] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.467-0500 I INDEX [conn82] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.604-0500 I INDEX [conn86] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.578-0500 I INDEX [conn80] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.362-0500 W STORAGE [conn129] Could not complete validation of table:collection-22-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.498-0500 I INDEX [conn76] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.395-0500 I INDEX [conn177] validating collection local.replset.minvalid (UUID: a96fd08c-e1c8-43e5-868a-0849697b175e)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.472-0500 I INDEX [conn82] validating collection local.replset.oplogTruncateAfterPoint (UUID: ae67a1b2-b2be-4d7e-8242-18f3082bc280)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.665-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-60--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 2832)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.665-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-60--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 2832)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.362-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection config.migrations
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.362-0500 W STORAGE [conn129] Could not complete validation of table:index-23-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.395-0500 I INDEX [conn177] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.472-0500 I INDEX [conn82] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.714-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-69--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 2832)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.714-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-69--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 2832)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.500-0500 I INDEX [conn76] validating collection local.system.replset (UUID: c43cc3e4-845d-4144-8406-83bf4df96d39)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.362-0500 I INDEX [conn129] validating the internal structure of index ns_1_min_1 on collection config.migrations
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.362-0500 W STORAGE [conn129] Could not complete validation of table:index-24-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.362-0500 I INDEX [conn129] validating collection config.migrations (UUID: 550e32ef-0dd4-48f9-bb5e-9e21bec0734f)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.715-0500 I INDEX [conn86] validating collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.777-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-59--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 2832)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.500-0500 I INDEX [conn76] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.395-0500 I INDEX [conn177] Validation complete for collection local.replset.minvalid (UUID: a96fd08c-e1c8-43e5-868a-0849697b175e). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.472-0500 I INDEX [conn82] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: ae67a1b2-b2be-4d7e-8242-18f3082bc280). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.362-0500 I INDEX [conn129] validating index consistency _id_ on collection config.migrations
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.715-0500 I INDEX [conn86] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.777-0500 I INDEX [conn80] validating collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.501-0500 I INDEX [conn76] Validation complete for collection local.system.replset (UUID: c43cc3e4-845d-4144-8406-83bf4df96d39). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.395-0500 I COMMAND [conn177] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.472-0500 I COMMAND [conn82] command local.$cmd appName: "MongoDB Shell" command: validate { validate: "replset.oplogTruncateAfterPoint", full: true, lsid: { id: UUID("d889996e-68cb-4b26-af01-0ebccab29090") }, $clusterTime: { clusterTime: Timestamp(1574796709, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:563 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 287, timeReadingMicros: 9 } } protocol:op_msg 101ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.362-0500 I INDEX [conn129] validating index consistency ns_1_min_1 on collection config.migrations
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.715-0500 I INDEX [conn86] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.777-0500 I INDEX [conn80] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.501-0500 I COMMAND [conn76] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.396-0500 I INDEX [conn177] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.473-0500 I COMMAND [conn82] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.362-0500 I INDEX [conn129] Validation complete for collection config.migrations (UUID: 550e32ef-0dd4-48f9-bb5e-9e21bec0734f). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.715-0500 I INDEX [conn86] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.777-0500 I INDEX [conn80] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.502-0500 I INDEX [conn76] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.398-0500 I INDEX [conn177] validating collection local.replset.oplogTruncateAfterPoint (UUID: 4ac06258-0ea7-46c8-b773-0c637830872b)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.474-0500 I INDEX [conn82] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.363-0500 I COMMAND [conn129] CMD: validate config.mongos, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.715-0500 I COMMAND [conn86] command config.$cmd appName: "MongoDB Shell" command: validate { validate: "cache.chunks.config.system.sessions", full: true, lsid: { id: UUID("062a8369-aa85-4710-98c0-d8991d6c8cc4") }, $clusterTime: { clusterTime: Timestamp(1574796709, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:607 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 441, timeReadingMicros: 10 }, timeWaitingMicros: { handleLock: 1, schemaLock: 47356 } } protocol:op_msg 221ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.778-0500 I INDEX [conn80] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.504-0500 I INDEX [conn76] validating collection local.system.rollback.id (UUID: af3b2fdb-b5ae-49b3-a026-c55e1bf822c0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.398-0500 I INDEX [conn177] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.477-0500 I INDEX [conn82] validating collection local.startup_log (UUID: fb2ea5d2-ac7b-4697-a368-9f5d41483423)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.363-0500 W STORAGE [conn129] Could not complete validation of table:collection-43-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.715-0500 I COMMAND [conn86] CMD: validate config.cache.chunks.test3_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.778-0500 I COMMAND [conn80] command config.$cmd appName: "MongoDB Shell" command: validate { validate: "cache.chunks.config.system.sessions", full: true, lsid: { id: UUID("13610985-66ad-42da-9287-9b4db3b8e39b") }, $clusterTime: { clusterTime: Timestamp(1574796709, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:607 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 441, timeReadingMicros: 11 }, timeWaitingMicros: { handleLock: 29, schemaLock: 110113 } } protocol:op_msg 308ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.504-0500 I INDEX [conn76] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.398-0500 I INDEX [conn177] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 4ac06258-0ea7-46c8-b773-0c637830872b). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.477-0500 I INDEX [conn82] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.363-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection config.mongos
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.777-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-59--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 2832)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.778-0500 I COMMAND [conn80] CMD: validate config.cache.chunks.test3_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.504-0500 I INDEX [conn76] Validation complete for collection local.system.rollback.id (UUID: af3b2fdb-b5ae-49b3-a026-c55e1bf822c0). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.399-0500 I COMMAND [conn177] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.477-0500 I INDEX [conn82] Validation complete for collection local.startup_log (UUID: fb2ea5d2-ac7b-4697-a368-9f5d41483423). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.364-0500 I INDEX [conn129] validating collection config.mongos (UUID: 57207abe-6d8d-4102-a526-bc847dba6c09)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.777-0500 W STORAGE [conn86] Could not complete validation of table:collection-349--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.825-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-62--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3014)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.505-0500 I COMMAND [conn76] CMD: validate test3_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.400-0500 I INDEX [conn177] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.478-0500 I COMMAND [conn82] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.364-0500 I INDEX [conn129] validating index consistency _id_ on collection config.mongos
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.777-0500 I INDEX [conn86] validating the internal structure of index _id_ on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.826-0500 W STORAGE [conn80] Could not complete validation of table:collection-349--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.506-0500 W STORAGE [conn76] Could not complete validation of table:collection-197--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.402-0500 I INDEX [conn177] validating collection local.startup_log (UUID: e8e71921-e80f-42ad-92d0-ad769374a694)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.478-0500 I INDEX [conn82] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.365-0500 I INDEX [conn129] Validation complete for collection config.mongos (UUID: 57207abe-6d8d-4102-a526-bc847dba6c09). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.825-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-62--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3014)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.826-0500 I INDEX [conn80] validating the internal structure of index _id_ on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.506-0500 I INDEX [conn76] validating the internal structure of index _id_ on collection test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.402-0500 I INDEX [conn177] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.480-0500 I INDEX [conn82] validating collection local.system.replset (UUID: 2b695a66-e9c6-4bba-a36e-eb0a5cf356ba)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.365-0500 I COMMAND [conn129] CMD: validate config.settings, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.826-0500 W STORAGE [conn86] Could not complete validation of table:index-350--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.888-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-71--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3014)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.506-0500 W STORAGE [conn76] Could not complete validation of table:index-198--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.402-0500 I INDEX [conn177] Validation complete for collection local.startup_log (UUID: e8e71921-e80f-42ad-92d0-ad769374a694). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.481-0500 I INDEX [conn82] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.365-0500 W STORAGE [conn129] Could not complete validation of table:collection-45-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.826-0500 I INDEX [conn86] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.888-0500 W STORAGE [conn80] Could not complete validation of table:index-350--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.506-0500 I INDEX [conn76] validating the internal structure of index _id_hashed on collection test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.402-0500 I COMMAND [conn177] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.481-0500 I INDEX [conn82] Validation complete for collection local.system.replset (UUID: 2b695a66-e9c6-4bba-a36e-eb0a5cf356ba). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.365-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection config.settings
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.888-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-71--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3014)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.888-0500 I INDEX [conn80] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.954-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-61--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3014)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.956-0500 I INDEX [conn80] validating collection config.cache.chunks.test3_fsmdb0.fsmcoll0 (UUID: d291b2bc-f179-4f06-8164-0b81d0131eb1)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.956-0500 I INDEX [conn80] validating index consistency _id_ on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.365-0500 W STORAGE [conn129] Could not complete validation of table:index-46-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.890-0500 I INDEX [conn86] validating collection config.cache.chunks.test3_fsmdb0.fsmcoll0 (UUID: d291b2bc-f179-4f06-8164-0b81d0131eb1)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.506-0500 W STORAGE [conn76] Could not complete validation of table:index-201--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.403-0500 I INDEX [conn177] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.481-0500 I COMMAND [conn82] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.956-0500 I INDEX [conn80] validating index consistency lastmod_1 on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.365-0500 I INDEX [conn129] validating collection config.settings (UUID: 6d167d1d-0483-49b9-9ac8-ee5b66996698)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.890-0500 I INDEX [conn86] validating index consistency _id_ on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.506-0500 I INDEX [conn76] validating collection test3_fsmdb0.agg_out (UUID: d108d732-d756-4f25-8812-a6483de9ea4c)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.406-0500 I INDEX [conn177] validating collection local.system.replset (UUID: 318b7af2-23ac-427e-bba7-a3e3f5b1e60d)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.482-0500 I INDEX [conn82] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.956-0500 I INDEX [conn80] Validation complete for collection config.cache.chunks.test3_fsmdb0.fsmcoll0 (UUID: d291b2bc-f179-4f06-8164-0b81d0131eb1). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.365-0500 I INDEX [conn129] validating index consistency _id_ on collection config.settings
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.890-0500 I INDEX [conn86] validating index consistency lastmod_1 on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.507-0500 I INDEX [conn76] validating index consistency _id_ on collection test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.406-0500 I INDEX [conn177] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.484-0500 I INDEX [conn82] validating collection local.system.rollback.id (UUID: d6027364-802b-4e8d-ae7f-556bc4252840)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.956-0500 I COMMAND [conn80] command config.$cmd appName: "MongoDB Shell" command: validate { validate: "cache.chunks.test3_fsmdb0.fsmcoll0", full: true, lsid: { id: UUID("13610985-66ad-42da-9287-9b4db3b8e39b") }, $clusterTime: { clusterTime: Timestamp(1574796709, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:1115 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 126, timeReadingMicros: 3 }, timeWaitingMicros: { handleLock: 8 } } protocol:op_msg 177ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.365-0500 I INDEX [conn129] Validation complete for collection config.settings (UUID: 6d167d1d-0483-49b9-9ac8-ee5b66996698). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.890-0500 I INDEX [conn86] Validation complete for collection config.cache.chunks.test3_fsmdb0.fsmcoll0 (UUID: d291b2bc-f179-4f06-8164-0b81d0131eb1). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.508-0500 I INDEX [conn76] validating index consistency _id_hashed on collection test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.406-0500 I INDEX [conn177] Validation complete for collection local.system.replset (UUID: 318b7af2-23ac-427e-bba7-a3e3f5b1e60d). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.484-0500 I INDEX [conn82] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:53.956-0500 I COMMAND [conn80] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.366-0500 I COMMAND [conn129] CMD: validate config.shards, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.890-0500 I COMMAND [conn86] command config.$cmd appName: "MongoDB Shell" command: validate { validate: "cache.chunks.test3_fsmdb0.fsmcoll0", full: true, lsid: { id: UUID("062a8369-aa85-4710-98c0-d8991d6c8cc4") }, $clusterTime: { clusterTime: Timestamp(1574796709, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:1115 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 126, timeReadingMicros: 3 }, timeWaitingMicros: { handleLock: 8 } } protocol:op_msg 174ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.508-0500 I INDEX [conn76] Validation complete for collection test3_fsmdb0.agg_out (UUID: d108d732-d756-4f25-8812-a6483de9ea4c). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.407-0500 I COMMAND [conn177] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.484-0500 I INDEX [conn82] Validation complete for collection local.system.rollback.id (UUID: d6027364-802b-4e8d-ae7f-556bc4252840). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.020-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-64--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3071)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.366-0500 W STORAGE [conn129] Could not complete validation of table:collection-25-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:53.891-0500 I COMMAND [conn86] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.509-0500 I COMMAND [conn76] CMD: validate test3_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.407-0500 I INDEX [conn177] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.409-0500 I INDEX [conn177] validating collection local.system.rollback.id (UUID: 2d9a033a-73d1-44ef-b7d1-30b6243b0419)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.409-0500 I INDEX [conn177] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.409-0500 I INDEX [conn177] Validation complete for collection local.system.rollback.id (UUID: 2d9a033a-73d1-44ef-b7d1-30b6243b0419). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.020-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-61--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3014)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.509-0500 W STORAGE [conn76] Could not complete validation of table:collection-165--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.486-0500 I COMMAND [conn82] CMD: validate test3_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.020-0500 W STORAGE [conn80] Could not complete validation of table:collection-29--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.366-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection config.shards
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.411-0500 I COMMAND [conn177] CMD: validate test3_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.020-0500 W STORAGE [conn86] Could not complete validation of table:collection-29--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.509-0500 I INDEX [conn76] validating the internal structure of index _id_ on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.486-0500 W STORAGE [conn82] Could not complete validation of table:collection-197--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.020-0500 I INDEX [conn80] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.367-0500 I INDEX [conn129] validating the internal structure of index host_1 on collection config.shards
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.411-0500 W STORAGE [conn177] Could not complete validation of table:collection-335-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.411-0500 I INDEX [conn177] validating the internal structure of index _id_ on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.411-0500 W STORAGE [conn177] Could not complete validation of table:index-336-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.486-0500 I INDEX [conn82] validating the internal structure of index _id_ on collection test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.153-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-73--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3071)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.369-0500 I INDEX [conn129] validating collection config.shards (UUID: ed6a2b77-0788-4ad3-a1b0-ccd61535c24f)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.020-0500 I INDEX [conn86] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.509-0500 W STORAGE [conn76] Could not complete validation of table:index-166--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.411-0500 I INDEX [conn177] validating the internal structure of index _id_hashed on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.411-0500 W STORAGE [conn177] Could not complete validation of table:index-337-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.412-0500 I INDEX [conn177] validating collection test3_fsmdb0.fsmcoll0 (UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.153-0500 W STORAGE [conn80] Could not complete validation of table:index-30--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.087-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-64--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3071)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.509-0500 I INDEX [conn76] validating the internal structure of index _id_hashed on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.486-0500 W STORAGE [conn82] Could not complete validation of table:index-198--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.369-0500 I INDEX [conn129] validating index consistency _id_ on collection config.shards
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.413-0500 I INDEX [conn177] validating index consistency _id_ on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.154-0500 I INDEX [conn80] validating collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.087-0500 W STORAGE [conn86] Could not complete validation of table:index-30--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.509-0500 W STORAGE [conn76] Could not complete validation of table:index-167--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.486-0500 I INDEX [conn82] validating the internal structure of index _id_hashed on collection test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.369-0500 I INDEX [conn129] validating index consistency host_1 on collection config.shards
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.413-0500 I INDEX [conn177] validating index consistency _id_hashed on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.154-0500 I INDEX [conn80] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.087-0500 I INDEX [conn86] validating collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.510-0500 I INDEX [conn76] validating collection test3_fsmdb0.fsmcoll0 (UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.487-0500 W STORAGE [conn82] Could not complete validation of table:index-201--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.369-0500 I INDEX [conn129] Validation complete for collection config.shards (UUID: ed6a2b77-0788-4ad3-a1b0-ccd61535c24f). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.414-0500 I INDEX [conn177] Validation complete for collection test3_fsmdb0.fsmcoll0 (UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.154-0500 I INDEX [conn80] Validation complete for collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.087-0500 I INDEX [conn86] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.511-0500 I INDEX [conn76] validating index consistency _id_ on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.487-0500 I INDEX [conn82] validating collection test3_fsmdb0.agg_out (UUID: d108d732-d756-4f25-8812-a6483de9ea4c)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.369-0500 I COMMAND [conn129] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:53.415-0500 I NETWORK [conn177] end connection 127.0.0.1:39746 (39 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.154-0500 I COMMAND [conn80] command config.$cmd appName: "MongoDB Shell" command: validate { validate: "cache.collections", full: true, lsid: { id: UUID("13610985-66ad-42da-9287-9b4db3b8e39b") }, $clusterTime: { clusterTime: Timestamp(1574796709, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:1056 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 197ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.087-0500 I INDEX [conn86] Validation complete for collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.511-0500 I INDEX [conn76] validating index consistency _id_hashed on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.488-0500 I INDEX [conn82] validating index consistency _id_ on collection test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.369-0500 W STORAGE [conn129] Could not complete validation of table:collection-53-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.154-0500 I COMMAND [conn80] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.088-0500 I COMMAND [conn86] command config.$cmd appName: "MongoDB Shell" command: validate { validate: "cache.collections", full: true, lsid: { id: UUID("062a8369-aa85-4710-98c0-d8991d6c8cc4") }, $clusterTime: { clusterTime: Timestamp(1574796709, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:1056 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 197ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.511-0500 I INDEX [conn76] Validation complete for collection test3_fsmdb0.fsmcoll0 (UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.488-0500 I INDEX [conn82] validating index consistency _id_hashed on collection test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.369-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.202-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-63--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3071)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.088-0500 I COMMAND [conn86] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:53.513-0500 I NETWORK [conn76] end connection 127.0.0.1:35780 (10 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.489-0500 I INDEX [conn82] Validation complete for collection test3_fsmdb0.agg_out (UUID: d108d732-d756-4f25-8812-a6483de9ea4c). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.369-0500 W STORAGE [conn129] Could not complete validation of table:index-54-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.202-0500 W STORAGE [conn80] Could not complete validation of table:collection-27--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.153-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-73--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3071)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.489-0500 I COMMAND [conn82] CMD: validate test3_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.369-0500 I INDEX [conn129] validating collection config.system.sessions (UUID: 9014747b-5aa2-462f-9e13-1e6b27298390)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.202-0500 I INDEX [conn80] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.154-0500 W STORAGE [conn86] Could not complete validation of table:collection-27--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.490-0500 W STORAGE [conn82] Could not complete validation of table:collection-165--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.370-0500 I INDEX [conn129] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.265-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-66--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3579)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.154-0500 I INDEX [conn86] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.490-0500 I INDEX [conn82] validating the internal structure of index _id_ on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.370-0500 I INDEX [conn129] Validation complete for collection config.system.sessions (UUID: 9014747b-5aa2-462f-9e13-1e6b27298390). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.265-0500 W STORAGE [conn80] Could not complete validation of table:index-28--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.202-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-63--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3071)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.490-0500 W STORAGE [conn82] Could not complete validation of table:index-166--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.370-0500 I COMMAND [conn129] CMD: validate config.tags, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.265-0500 I INDEX [conn80] validating collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.202-0500 W STORAGE [conn86] Could not complete validation of table:index-28--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.490-0500 I INDEX [conn82] validating the internal structure of index _id_hashed on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.370-0500 W STORAGE [conn129] Could not complete validation of table:collection-35-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.265-0500 I INDEX [conn80] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.202-0500 I INDEX [conn86] validating collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.490-0500 W STORAGE [conn82] Could not complete validation of table:index-167--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.370-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection config.tags
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.265-0500 I INDEX [conn80] Validation complete for collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.202-0500 I INDEX [conn86] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.490-0500 I INDEX [conn82] validating collection test3_fsmdb0.fsmcoll0 (UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.370-0500 W STORAGE [conn129] Could not complete validation of table:index-36-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.265-0500 I COMMAND [conn80] command config.$cmd appName: "MongoDB Shell" command: validate { validate: "cache.databases", full: true, lsid: { id: UUID("13610985-66ad-42da-9287-9b4db3b8e39b") }, $clusterTime: { clusterTime: Timestamp(1574796709, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:1054 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 110ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.203-0500 I INDEX [conn86] Validation complete for collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.492-0500 I INDEX [conn82] validating index consistency _id_ on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.370-0500 I INDEX [conn129] validating the internal structure of index ns_1_min_1 on collection config.tags
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.266-0500 I COMMAND [conn80] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.203-0500 I COMMAND [conn86] command config.$cmd appName: "MongoDB Shell" command: validate { validate: "cache.databases", full: true, lsid: { id: UUID("062a8369-aa85-4710-98c0-d8991d6c8cc4") }, $clusterTime: { clusterTime: Timestamp(1574796709, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:1054 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 114ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.492-0500 I INDEX [conn82] validating index consistency _id_hashed on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.370-0500 W STORAGE [conn129] Could not complete validation of table:index-37-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.314-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-75--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3579)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.203-0500 I COMMAND [conn86] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.492-0500 I INDEX [conn82] Validation complete for collection test3_fsmdb0.fsmcoll0 (UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.370-0500 I INDEX [conn129] validating the internal structure of index ns_1_tag_1 on collection config.tags
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.314-0500 I INDEX [conn80] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.265-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-66--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3579)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:53.494-0500 I NETWORK [conn82] end connection 127.0.0.1:52416 (10 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.370-0500 W STORAGE [conn129] Could not complete validation of table:index-38-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.376-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-65--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3579)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.266-0500 I INDEX [conn86] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.370-0500 I INDEX [conn129] validating collection config.tags (UUID: d225b508-e40e-4c3c-a716-26adc4561055)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.378-0500 I INDEX [conn80] validating the internal structure of index lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.314-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-75--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3579)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.371-0500 I INDEX [conn129] validating index consistency _id_ on collection config.tags
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.315-0500 I INDEX [conn86] validating the internal structure of index lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.371-0500 I INDEX [conn129] validating index consistency ns_1_min_1 on collection config.tags
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.376-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-65--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796670, 3579)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.371-0500 I INDEX [conn129] validating index consistency ns_1_tag_1 on collection config.tags
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.371-0500 I INDEX [conn129] Validation complete for collection config.tags (UUID: d225b508-e40e-4c3c-a716-26adc4561055). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.371-0500 I COMMAND [conn129] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.371-0500 W STORAGE [conn129] Could not complete validation of table:collection-15-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.371-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.371-0500 W STORAGE [conn129] Could not complete validation of table:index-16-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.371-0500 I INDEX [conn129] validating collection config.transactions (UUID: c2741992-901b-4092-a01f-3dfe88ab21c5)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.371-0500 I INDEX [conn129] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.371-0500 I INDEX [conn129] Validation complete for collection config.transactions (UUID: c2741992-901b-4092-a01f-3dfe88ab21c5). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.372-0500 I COMMAND [conn129] CMD: validate config.version, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.373-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection config.version
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.375-0500 I INDEX [conn129] validating collection config.version (UUID: d52b8328-6d55-4f54-8cfd-e715a58e3315)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.375-0500 I INDEX [conn129] validating index consistency _id_ on collection config.version
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.375-0500 I INDEX [conn129] Validation complete for collection config.version (UUID: d52b8328-6d55-4f54-8cfd-e715a58e3315). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.377-0500 I COMMAND [conn129] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.377-0500 W STORAGE [conn129] Could not complete validation of table:collection-10-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.377-0500 I INDEX [conn129] validating collection local.oplog.rs (UUID: 5bb0c359-7cb9-48f8-8ff8-4b4c84c12ec5)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.377-0500 I INDEX [conn129] Validation complete for collection local.oplog.rs (UUID: 5bb0c359-7cb9-48f8-8ff8-4b4c84c12ec5). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.378-0500 I COMMAND [conn129] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.379-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.381-0500 I INDEX [conn129] validating collection local.replset.election (UUID: 5f00e271-c3c6-4d7b-9d39-1c8e9e8a77d4)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.381-0500 I INDEX [conn129] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.381-0500 I INDEX [conn129] Validation complete for collection local.replset.election (UUID: 5f00e271-c3c6-4d7b-9d39-1c8e9e8a77d4). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.381-0500 I COMMAND [conn129] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.382-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.384-0500 I INDEX [conn129] validating collection local.replset.minvalid (UUID: ce934bfb-84f4-4d44-a963-37c09c6c95a6)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.384-0500 I INDEX [conn129] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.384-0500 I INDEX [conn129] Validation complete for collection local.replset.minvalid (UUID: ce934bfb-84f4-4d44-a963-37c09c6c95a6). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.385-0500 I COMMAND [conn129] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.386-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.387-0500 I INDEX [conn129] validating collection local.replset.oplogTruncateAfterPoint (UUID: b5258dce-fb89-4436-a191-b8586ea2e6c0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.387-0500 I INDEX [conn129] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.388-0500 I INDEX [conn129] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: b5258dce-fb89-4436-a191-b8586ea2e6c0). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.388-0500 I COMMAND [conn129] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.389-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.391-0500 I INDEX [conn129] validating collection local.startup_log (UUID: a1488758-c116-4144-adba-02b8f3b8144d)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.391-0500 I INDEX [conn129] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.391-0500 I INDEX [conn129] Validation complete for collection local.startup_log (UUID: a1488758-c116-4144-adba-02b8f3b8144d). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.391-0500 I COMMAND [conn129] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.392-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.394-0500 I INDEX [conn129] validating collection local.system.replset (UUID: ea98bf03-b956-4e01-b9a4-857e601cceda)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.394-0500 I INDEX [conn129] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.394-0500 I INDEX [conn129] Validation complete for collection local.system.replset (UUID: ea98bf03-b956-4e01-b9a4-857e601cceda). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.395-0500 I COMMAND [conn129] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.396-0500 I INDEX [conn129] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.397-0500 I INDEX [conn129] validating collection local.system.rollback.id (UUID: 0ad52f2a-9d3e-4f9f-b91b-17a9c570ab7e)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.397-0500 I INDEX [conn129] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.398-0500 I INDEX [conn129] Validation complete for collection local.system.rollback.id (UUID: 0ad52f2a-9d3e-4f9f-b91b-17a9c570ab7e). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:53.399-0500 I NETWORK [conn129] end connection 127.0.0.1:57004 (34 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.424-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-78--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313) with drop timestamp Timestamp(1574796670, 4653)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.424-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-78--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313) with drop timestamp Timestamp(1574796670, 4653)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.425-0500 I INDEX [conn86] validating collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.425-0500 I INDEX [conn86] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.425-0500 I INDEX [conn86] validating index consistency lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.425-0500 I INDEX [conn86] Validation complete for collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.426-0500 I COMMAND [conn86] command config.$cmd appName: "MongoDB Shell" command: validate { validate: "system.sessions", full: true, lsid: { id: UUID("062a8369-aa85-4710-98c0-d8991d6c8cc4") }, $clusterTime: { clusterTime: Timestamp(1574796709, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:593 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 4388, timeReadingMicros: 12 }, timeWaitingMicros: { schemaLock: 46744 } } protocol:op_msg 222ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.426-0500 I COMMAND [conn86] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.519-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-87--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313) with drop timestamp Timestamp(1574796670, 4653)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.520-0500 I INDEX [conn80] validating collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.520-0500 I INDEX [conn80] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.520-0500 I INDEX [conn80] validating index consistency lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.520-0500 I INDEX [conn80] Validation complete for collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.520-0500 I COMMAND [conn80] command config.$cmd appName: "MongoDB Shell" command: validate { validate: "system.sessions", full: true, lsid: { id: UUID("13610985-66ad-42da-9287-9b4db3b8e39b") }, $clusterTime: { clusterTime: Timestamp(1574796709, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:593 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 4387, timeReadingMicros: 12 }, timeWaitingMicros: { schemaLock: 93159 } } protocol:op_msg 254ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.520-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-87--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313) with drop timestamp Timestamp(1574796670, 4653)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.521-0500 W STORAGE [conn86] Could not complete validation of table:collection-21--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.521-0500 I INDEX [conn86] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.521-0500 I COMMAND [conn80] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.521-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-77--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313) with drop timestamp Timestamp(1574796670, 4653)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.522-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-77--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.f4cd9bb8-1b13-49f8-8879-5a62d5a96313) with drop timestamp Timestamp(1574796670, 4653)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.523-0500 I INDEX [conn86] validating collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.524-0500 I INDEX [conn86] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.524-0500 I INDEX [conn86] Validation complete for collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.525-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-82--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad) with drop timestamp Timestamp(1574796671, 3)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:31:54.703-0500 JSTest jstests/hooks/run_validate_collections.js finished.
[executor:fsm_workload_test:job0] 2019-11-26T14:31:55.617-0500 agg_out:ValidateCollections ran in 5.33 seconds: no failures detected.
[executor:fsm_workload_test:job0] 2019-11-26T14:31:55.617-0500 Running agg_out:CleanupConcurrencyWorkloads...
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.525-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-82--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad) with drop timestamp Timestamp(1574796671, 3)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.592-0500 I INDEX [conn181] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:54.614-0500 I NETWORK [conn153] end connection 127.0.0.1:45598 (3 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:54.693-0500 I NETWORK [conn128] end connection 127.0.0.1:56972 (33 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:54.693-0500 I NETWORK [conn176] end connection 127.0.0.1:39720 (38 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:54.702-0500 I NETWORK [conn81] end connection 127.0.0.1:52388 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:54.702-0500 I NETWORK [conn75] end connection 127.0.0.1:35746 (9 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.526-0500 I COMMAND [conn86] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.525-0500 W STORAGE [conn80] Could not complete validation of table:collection-21--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:54.653-0500 I NETWORK [conn150] end connection 127.0.0.1:45592 (2 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.594-0500 I INDEX [conn181] validating collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:55.620-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58758 #48 (1 connection now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:54.702-0500 I NETWORK [conn127] end connection 127.0.0.1:56970 (32 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:54.702-0500 I NETWORK [conn175] end connection 127.0.0.1:39716 (37 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.526-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-91--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad) with drop timestamp Timestamp(1574796671, 3)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.525-0500 I INDEX [conn80] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:54.689-0500 I NETWORK [conn151] end connection 127.0.0.1:45594 (1 connection now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.594-0500 I INDEX [conn181] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:55.500-0500 I NETWORK [conn18] end connection 127.0.0.1:55578 (31 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:55.621-0500 I NETWORK [conn48] received client metadata from 127.0.0.1:58758 conn48: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.534-0500 W STORAGE [conn86] Could not complete validation of table:collection-16--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.526-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-91--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad) with drop timestamp Timestamp(1574796671, 3)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:54.692-0500 I NETWORK [conn147] end connection 127.0.0.1:45566 (0 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.594-0500 I INDEX [conn181] Validation complete for collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5). No corruption found.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:55.622-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58762 #49 (2 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.534-0500 I INDEX [conn86] validating collection local.oplog.rs (UUID: 88962763-38f7-4965-bfd6-b2a62304ae0e)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.528-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-81--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad) with drop timestamp Timestamp(1574796671, 3)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:55.500-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 3 connections to that host remain open
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.594-0500 I COMMAND [conn181] command admin.$cmd appName: "MongoDB Shell" command: validate { validate: "system.version", full: true, lsid: { id: UUID("dde9e419-c5ac-45c6-82b2-fb794d6355b6") }, $clusterTime: { clusterTime: Timestamp(1574796709, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } dataThroughputLastSecond: 4.93407e-05 MB/sec dataThroughputAverage: 4.93407e-05 MB/sec numYields:0 reslen:546 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { W: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 436, timeReadingMicros: 7 } } protocol:op_msg 1297ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:55.622-0500 I NETWORK [conn49] received client metadata from 127.0.0.1:58762 conn49: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.535-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-81--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.a2c4d579-7b07-4450-b3d6-32c61b9ccaad) with drop timestamp Timestamp(1574796671, 3)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.529-0500 I INDEX [conn80] validating collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.596-0500 I COMMAND [conn181] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:55.620-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45618 #155 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.536-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-86--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440) with drop timestamp Timestamp(1574796671, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.529-0500 I INDEX [conn80] validating index consistency _id_ on collection config.transactions
[CleanupConcurrencyWorkloads:job0:agg_out:CleanupConcurrencyWorkloads] 2019-11-26T14:31:55.624-0500 Dropping all databases except for ['config', 'local', '$external', 'admin']
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.597-0500 I INDEX [conn181] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[CleanupConcurrencyWorkloads:job0:agg_out:CleanupConcurrencyWorkloads] 2019-11-26T14:31:55.625-0500 Dropping database test3_fsmdb0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:55.621-0500 I NETWORK [conn155] received client metadata from 127.0.0.1:45618 conn155: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.538-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-95--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440) with drop timestamp Timestamp(1574796671, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.529-0500 I INDEX [conn80] Validation complete for collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.598-0500 I INDEX [conn181] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.541-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-85--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440) with drop timestamp Timestamp(1574796671, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.530-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-86--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440) with drop timestamp Timestamp(1574796671, 5)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.600-0500 I INDEX [conn181] validating collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.542-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-90--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0) with drop timestamp Timestamp(1574796671, 1013)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.531-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-95--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440) with drop timestamp Timestamp(1574796671, 5)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.600-0500 I INDEX [conn181] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.544-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-97--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0) with drop timestamp Timestamp(1574796671, 1013)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.533-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-85--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.b9691f9a-8076-4b98-bd27-b653412fe440) with drop timestamp Timestamp(1574796671, 5)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.600-0500 I INDEX [conn181] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.546-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-89--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0) with drop timestamp Timestamp(1574796671, 1013)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.533-0500 I COMMAND [conn80] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.600-0500 I INDEX [conn181] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.547-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-94--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4) with drop timestamp Timestamp(1574796671, 1014)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.534-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-90--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0) with drop timestamp Timestamp(1574796671, 1013)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.601-0500 I COMMAND [conn181] CMD: validate config.cache.chunks.test3_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.549-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-99--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4) with drop timestamp Timestamp(1574796671, 1014)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.534-0500 W STORAGE [conn80] Could not complete validation of table:collection-16--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.601-0500 W STORAGE [conn181] Could not complete validation of table:collection-205--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.550-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-93--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4) with drop timestamp Timestamp(1574796671, 1014)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.534-0500 I INDEX [conn80] validating collection local.oplog.rs (UUID: 6d43bede-f05f-41b1-b7ac-5a32b66b8140)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.601-0500 I INDEX [conn181] validating the internal structure of index _id_ on collection config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.552-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-80--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 1521)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.536-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-97--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0) with drop timestamp Timestamp(1574796671, 1013)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.601-0500 W STORAGE [conn181] Could not complete validation of table:index-207--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.555-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-83--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 1521)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.539-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-89--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.032f265b-cc76-4def-9705-64af12fe30f0) with drop timestamp Timestamp(1574796671, 1013)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.601-0500 I INDEX [conn181] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.556-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-79--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 1521)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.541-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-94--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4) with drop timestamp Timestamp(1574796671, 1014)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.601-0500 W STORAGE [conn181] Could not complete validation of table:index-210--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.558-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-102--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2027)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.542-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-99--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4) with drop timestamp Timestamp(1574796671, 1014)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.601-0500 I INDEX [conn181] validating collection config.cache.chunks.test3_fsmdb0.agg_out (UUID: 4c26dac0-af8d-4579-bbb5-32356c1d2f49)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.560-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-107--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2027)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.544-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-93--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.f6bb8b94-0874-4be3-8ae7-982aee34e6a4) with drop timestamp Timestamp(1574796671, 1014)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.601-0500 I INDEX [conn181] validating index consistency _id_ on collection config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.561-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-101--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2027)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.546-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-80--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 1521)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.601-0500 I INDEX [conn181] validating index consistency lastmod_1 on collection config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.563-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-104--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2530)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.547-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-83--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 1521)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.601-0500 I INDEX [conn181] Validation complete for collection config.cache.chunks.test3_fsmdb0.agg_out (UUID: 4c26dac0-af8d-4579-bbb5-32356c1d2f49). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.565-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-109--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2530)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.549-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-79--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 1521)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:55.627-0500 I SHARDING [conn22] distributed lock 'test3_fsmdb0' acquired for 'dropDatabase', ts : 5ddd7dab5cde74b6784bb8bc
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.602-0500 I COMMAND [conn181] CMD: validate config.cache.chunks.test3_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.566-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-103--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2530)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.550-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-102--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2027)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:55.628-0500 I SHARDING [conn22] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:55.627-0500-5ddd7dab5cde74b6784bb8bf", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55594", time: new Date(1574796715627), what: "dropDatabase.start", ns: "test3_fsmdb0", details: {} }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.602-0500 W STORAGE [conn181] Could not complete validation of table:collection-160--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.569-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-106--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 3036)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.553-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-107--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2027)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.602-0500 I INDEX [conn181] validating the internal structure of index _id_ on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.571-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-113--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 3036)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.555-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-101--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2027)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.602-0500 W STORAGE [conn181] Could not complete validation of table:index-161--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.572-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-105--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 3036)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.556-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-104--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2530)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.602-0500 I INDEX [conn181] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.573-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-112--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 6)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.558-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-109--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2530)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.602-0500 W STORAGE [conn181] Could not complete validation of table:index-162--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.575-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-119--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 6)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.560-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-103--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 2530)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.602-0500 I INDEX [conn181] validating collection config.cache.chunks.test3_fsmdb0.fsmcoll0 (UUID: a33e44c0-60ea-478a-83bd-e45f3213aca7)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.577-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-111--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 6)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.561-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-106--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 3036)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.602-0500 I INDEX [conn181] validating index consistency _id_ on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.578-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-122--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f) with drop timestamp Timestamp(1574796674, 1015)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.563-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-113--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 3036)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.602-0500 I INDEX [conn181] validating index consistency lastmod_1 on collection config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.579-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-127--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f) with drop timestamp Timestamp(1574796674, 1015)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.565-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-105--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796671, 3036)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.602-0500 I INDEX [conn181] Validation complete for collection config.cache.chunks.test3_fsmdb0.fsmcoll0 (UUID: a33e44c0-60ea-478a-83bd-e45f3213aca7). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.582-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-121--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f) with drop timestamp Timestamp(1574796674, 1015)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.568-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-112--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 6)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.603-0500 I COMMAND [conn181] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.583-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-116--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a) with drop timestamp Timestamp(1574796674, 1518)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.569-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-119--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 6)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.603-0500 W STORAGE [conn181] Could not complete validation of table:collection-18--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.585-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-129--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a) with drop timestamp Timestamp(1574796674, 1518)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.570-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-111--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 6)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.603-0500 I INDEX [conn181] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.587-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-115--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a) with drop timestamp Timestamp(1574796674, 1518)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.571-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-122--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f) with drop timestamp Timestamp(1574796674, 1015)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.603-0500 W STORAGE [conn181] Could not complete validation of table:index-20--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.589-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-124--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954) with drop timestamp Timestamp(1574796674, 1971)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.573-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-127--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f) with drop timestamp Timestamp(1574796674, 1015)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.603-0500 I INDEX [conn181] validating collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.590-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-133--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954) with drop timestamp Timestamp(1574796674, 1971)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.574-0500 I INDEX [conn80] Validation complete for collection local.oplog.rs (UUID: 6d43bede-f05f-41b1-b7ac-5a32b66b8140). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.603-0500 I INDEX [conn181] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.591-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-123--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954) with drop timestamp Timestamp(1574796674, 1971)
[executor:fsm_workload_test:job0] 2019-11-26T14:31:55.691-0500 agg_out:CleanupConcurrencyWorkloads ran in 0.07 seconds: no failures detected.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.575-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-121--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.d2218af1-92fd-422b-a74c-23adcbf6dc5f) with drop timestamp Timestamp(1574796674, 1015)
[executor] 2019-11-26T14:31:56.102-0500 Waiting for threads to complete
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.603-0500 I INDEX [conn181] Validation complete for collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:55.630-0500 I SHARDING [conn22] distributed lock 'test3_fsmdb0.agg_out' acquired for 'dropCollection', ts : 5ddd7dab5cde74b6784bb8c2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.633-0500 I COMMAND [conn37] CMD: drop test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.636-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.636-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:55.691-0500 I NETWORK [conn49] end connection 127.0.0.1:58762 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:55.691-0500 I NETWORK [conn155] end connection 127.0.0.1:45618 (0 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.592-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-132--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb) with drop timestamp Timestamp(1574796674, 2528)
[CheckReplDBHashInBackground:job0] Stopping the background check repl dbhash thread.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.575-0500 I COMMAND [conn80] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.604-0500 I COMMAND [conn181] CMD: validate config.cache.databases, full:true
[executor] 2019-11-26T14:31:56.104-0500 Threads are completed!
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:55.630-0500 I SHARDING [conn22] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:55.630-0500-5ddd7dab5cde74b6784bb8c4", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55594", time: new Date(1574796715630), what: "dropCollection.start", ns: "test3_fsmdb0.agg_out", details: {} }
[executor] 2019-11-26T14:31:56.104-0500 Summary of latest execution: All 5 test(s) passed in 12.08 seconds.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.650-0500 I COMMAND [conn37] CMD: drop test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.636-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test3_fsmdb0.agg_out (d108d732-d756-4f25-8812-a6483de9ea4c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796715, 7), t: 1 } and commit timestamp Timestamp(1574796715, 7)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.637-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test3_fsmdb0.agg_out (d108d732-d756-4f25-8812-a6483de9ea4c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796715, 7), t: 1 } and commit timestamp Timestamp(1574796715, 7)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:55.691-0500 I NETWORK [conn48] end connection 127.0.0.1:58758 (0 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.594-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-137--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb) with drop timestamp Timestamp(1574796674, 2528)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.577-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-116--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a) with drop timestamp Timestamp(1574796674, 1518)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.605-0500 I INDEX [conn181] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:55.643-0500 I SHARDING [conn22] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:55.643-0500-5ddd7dab5cde74b6784bb8cc", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55594", time: new Date(1574796715643), what: "dropCollection", ns: "test3_fsmdb0.agg_out", details: {} }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.650-0500 I STORAGE [conn37] dropCollection: test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.636-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test3_fsmdb0.agg_out (d108d732-d756-4f25-8812-a6483de9ea4c).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.637-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test3_fsmdb0.agg_out (d108d732-d756-4f25-8812-a6483de9ea4c).
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:56.107-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58764 #50 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.107-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45624 #156 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.594-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-131--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb) with drop timestamp Timestamp(1574796674, 2528)
[CheckReplDBHashInBackground:job0] Starting the background check repl dbhash thread.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.577-0500 I INDEX [conn80] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.607-0500 I INDEX [conn181] validating collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:55.646-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7dab5cde74b6784bb8c2' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.650-0500 I STORAGE [conn37] Finishing collection drop for test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.637-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (d108d732-d756-4f25-8812-a6483de9ea4c)'. Ident: 'index-198--7234316082034423155', commit timestamp: 'Timestamp(1574796715, 7)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.637-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (d108d732-d756-4f25-8812-a6483de9ea4c)'. Ident: 'index-198--2310912778499990807', commit timestamp: 'Timestamp(1574796715, 7)'
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:56.107-0500 I NETWORK [conn50] received client metadata from 127.0.0.1:58764 conn50: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.107-0500 I NETWORK [conn156] received client metadata from 127.0.0.1:45624 conn156: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.595-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-118--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 2529)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.578-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-129--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a) with drop timestamp Timestamp(1574796674, 1518)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.607-0500 I INDEX [conn181] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:55.648-0500 I SHARDING [conn22] distributed lock 'test3_fsmdb0.fsmcoll0' acquired for 'dropCollection', ts : 5ddd7dab5cde74b6784bb8cf
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.650-0500 I STORAGE [conn37] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635)'. Ident: 'index-336-8224331490264904478', commit timestamp: 'Timestamp(1574796715, 15)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.637-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (d108d732-d756-4f25-8812-a6483de9ea4c)'. Ident: 'index-201--7234316082034423155', commit timestamp: 'Timestamp(1574796715, 7)'
[CheckReplDBHashInBackground:job0] Resuming the background check repl dbhash thread.
[executor:fsm_workload_test:job0] 2019-11-26T14:31:56.112-0500 Running agg_out.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval TestData = new Object(); TestData["usingReplicaSetShards"] = true; TestData["runningWithAutoSplit"] = false; TestData["runningWithBalancer"] = false; TestData["fsmWorkloads"] = ["jstests/concurrency/fsm_workloads/agg_out.js"]; TestData["resmokeDbPathPrefix"] = "/home/nz_linux/data/job0/resmoke"; TestData["dbNamePrefix"] = "test4_"; TestData["sameDB"] = false; TestData["sameCollection"] = false; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "resmoke_runner"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); --readMode=commands mongodb://localhost:20007,localhost:20008 jstests/concurrency/fsm_libs/resmoke_runner.js
[fsm_workload_test:agg_out] 2019-11-26T14:31:56.113-0500 Starting FSM workload jstests/concurrency/fsm_workloads/agg_out.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval TestData = new Object(); TestData["usingReplicaSetShards"] = true; TestData["runningWithAutoSplit"] = false; TestData["runningWithBalancer"] = false; TestData["fsmWorkloads"] = ["jstests/concurrency/fsm_workloads/agg_out.js"]; TestData["resmokeDbPathPrefix"] = "/home/nz_linux/data/job0/resmoke"; TestData["dbNamePrefix"] = "test4_"; TestData["sameDB"] = false; TestData["sameCollection"] = false; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "resmoke_runner"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); --readMode=commands mongodb://localhost:20007,localhost:20008 jstests/concurrency/fsm_libs/resmoke_runner.js
[executor:fsm_workload_test:job0] 2019-11-26T14:31:56.113-0500 Running agg_out:CheckReplDBHashInBackground...
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.637-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (d108d732-d756-4f25-8812-a6483de9ea4c)'. Ident: 'index-201--2310912778499990807', commit timestamp: 'Timestamp(1574796715, 7)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:56.118-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash_background.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash_background"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash_background.js
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:56.109-0500 I NETWORK [conn50] end connection 127.0.0.1:58764 (0 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.109-0500 I NETWORK [conn156] end connection 127.0.0.1:45624 (0 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.596-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-125--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 2529)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.582-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-115--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.cb61079d-8ea4-4020-a329-c3b4e732245a) with drop timestamp Timestamp(1574796674, 1518)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.607-0500 I INDEX [conn181] Validation complete for collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:55.648-0500 I SHARDING [conn22] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:55.648-0500-5ddd7dab5cde74b6784bb8d1", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55594", time: new Date(1574796715648), what: "dropCollection.start", ns: "test3_fsmdb0.fsmcoll0", details: {} }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.650-0500 I STORAGE [conn37] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635)'. Ident: 'index-337-8224331490264904478', commit timestamp: 'Timestamp(1574796715, 15)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.637-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-197--7234316082034423155, commit timestamp: Timestamp(1574796715, 7)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.637-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-197--2310912778499990807, commit timestamp: Timestamp(1574796715, 7)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.597-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-117--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 2529)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.583-0500 I INDEX [conn80] validating collection local.replset.election (UUID: bf7b5380-e70a-475e-ad1b-16751bee6907)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.608-0500 I COMMAND [conn181] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:55.665-0500 I SHARDING [conn22] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:55.665-0500-5ddd7dab5cde74b6784bb8da", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55594", time: new Date(1574796715665), what: "dropCollection", ns: "test3_fsmdb0.fsmcoll0", details: {} }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.650-0500 I STORAGE [conn37] Deferring table drop for collection 'test3_fsmdb0.fsmcoll0'. Ident: collection-335-8224331490264904478, commit timestamp: Timestamp(1574796715, 15)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.649-0500 I COMMAND [ReplWriterWorker-3] CMD: drop config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.649-0500 I COMMAND [ReplWriterWorker-3] CMD: drop config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.598-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-140--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae) with drop timestamp Timestamp(1574796674, 3539)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.583-0500 I INDEX [conn80] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.608-0500 W STORAGE [conn181] Could not complete validation of table:collection-15--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:55.670-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7dab5cde74b6784bb8cf' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.663-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.fsmcoll0 took 0 ms and found the collection is not sharded
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.649-0500 I STORAGE [ReplWriterWorker-3] dropCollection: config.cache.chunks.test3_fsmdb0.agg_out (4c26dac0-af8d-4579-bbb5-32356c1d2f49) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796715, 11), t: 1 } and commit timestamp Timestamp(1574796715, 11)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.649-0500 I STORAGE [ReplWriterWorker-3] dropCollection: config.cache.chunks.test3_fsmdb0.agg_out (4c26dac0-af8d-4579-bbb5-32356c1d2f49) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796715, 11), t: 1 } and commit timestamp Timestamp(1574796715, 11)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.599-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-145--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae) with drop timestamp Timestamp(1574796674, 3539)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.583-0500 I INDEX [conn80] Validation complete for collection local.replset.election (UUID: bf7b5380-e70a-475e-ad1b-16751bee6907). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.608-0500 I INDEX [conn181] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:55.687-0500 I SHARDING [conn22] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:31:55.687-0500-5ddd7dab5cde74b6784bb8e2", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55594", time: new Date(1574796715687), what: "dropDatabase", ns: "test3_fsmdb0", details: {} }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.663-0500 I SHARDING [conn37] Updating metadata for collection test3_fsmdb0.fsmcoll0 from collection version: 1|3||5ddd7da0cf8184c2e1493df9, shard version: 1|1||5ddd7da0cf8184c2e1493df9 to collection version: due to UUID change
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.649-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for config.cache.chunks.test3_fsmdb0.agg_out (4c26dac0-af8d-4579-bbb5-32356c1d2f49).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.649-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for config.cache.chunks.test3_fsmdb0.agg_out (4c26dac0-af8d-4579-bbb5-32356c1d2f49).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.600-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-139--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae) with drop timestamp Timestamp(1574796674, 3539)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.603-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-144--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555) with drop timestamp Timestamp(1574796676, 2)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.608-0500 W STORAGE [conn181] Could not complete validation of table:index-16--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:55.690-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7dab5cde74b6784bb8bc' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.663-0500 I COMMAND [ShardServerCatalogCacheLoader-0] CMD: drop config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.649-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test3_fsmdb0.agg_out (4c26dac0-af8d-4579-bbb5-32356c1d2f49)'. Ident: 'index-212--7234316082034423155', commit timestamp: 'Timestamp(1574796715, 11)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.649-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test3_fsmdb0.agg_out (4c26dac0-af8d-4579-bbb5-32356c1d2f49)'. Ident: 'index-212--2310912778499990807', commit timestamp: 'Timestamp(1574796715, 11)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.584-0500 I COMMAND [conn80] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.604-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-149--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555) with drop timestamp Timestamp(1574796676, 2)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.608-0500 I INDEX [conn181] validating collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.663-0500 I STORAGE [ShardServerCatalogCacheLoader-0] dropCollection: config.cache.chunks.test3_fsmdb0.fsmcoll0 (d291b2bc-f179-4f06-8164-0b81d0131eb1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.649-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test3_fsmdb0.agg_out (4c26dac0-af8d-4579-bbb5-32356c1d2f49)'. Ident: 'index-219--7234316082034423155', commit timestamp: 'Timestamp(1574796715, 11)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.649-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test3_fsmdb0.agg_out (4c26dac0-af8d-4579-bbb5-32356c1d2f49)'. Ident: 'index-219--2310912778499990807', commit timestamp: 'Timestamp(1574796715, 11)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.584-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-124--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954) with drop timestamp Timestamp(1574796674, 1971)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.605-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-143--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555) with drop timestamp Timestamp(1574796676, 2)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.608-0500 I INDEX [conn181] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.663-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Finishing collection drop for config.cache.chunks.test3_fsmdb0.fsmcoll0 (d291b2bc-f179-4f06-8164-0b81d0131eb1).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.649-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'config.cache.chunks.test3_fsmdb0.agg_out'. Ident: collection-211--7234316082034423155, commit timestamp: Timestamp(1574796715, 11)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.649-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'config.cache.chunks.test3_fsmdb0.agg_out'. Ident: collection-211--2310912778499990807, commit timestamp: Timestamp(1574796715, 11)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.584-0500 W STORAGE [conn80] Could not complete validation of table:collection-4--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.606-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-148--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896) with drop timestamp Timestamp(1574796676, 507)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.608-0500 I INDEX [conn181] Validation complete for collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d). No corruption found.
[fsm_workload_test:agg_out] 2019-11-26T14:31:56.125-0500 FSM workload jstests/concurrency/fsm_workloads/agg_out.js started with pid 16091.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.664-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0 (d291b2bc-f179-4f06-8164-0b81d0131eb1)'. Ident: 'index-339-8224331490264904478', commit timestamp: 'Timestamp(1574796715, 23)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.658-0500 I COMMAND [ReplWriterWorker-9] CMD: drop test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.658-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.584-0500 I INDEX [conn80] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.607-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-155--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896) with drop timestamp Timestamp(1574796676, 507)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.610-0500 I COMMAND [conn181] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.664-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0 (d291b2bc-f179-4f06-8164-0b81d0131eb1)'. Ident: 'index-340-8224331490264904478', commit timestamp: 'Timestamp(1574796715, 23)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.658-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796715, 16), t: 1 } and commit timestamp Timestamp(1574796715, 16)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.129-0500 I NETWORK [conn24] end connection 127.0.0.1:55598 (30 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:56.129-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 3 connections to that host remain open
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.658-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796715, 16), t: 1 } and commit timestamp Timestamp(1574796715, 16)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.586-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-133--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954) with drop timestamp Timestamp(1574796674, 1971)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.608-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-147--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896) with drop timestamp Timestamp(1574796676, 507)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.610-0500 W STORAGE [conn181] Could not complete validation of table:collection-10--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.664-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0'. Ident: collection-338-8224331490264904478, commit timestamp: Timestamp(1574796715, 23)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.658-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.130-0500 I NETWORK [conn25] end connection 127.0.0.1:55600 (29 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:56.130-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 2 connections to that host remain open
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.658-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.587-0500 I INDEX [conn80] validating collection local.replset.minvalid (UUID: 6654b1c2-f323-4c78-9165-5ff31d331960)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.609-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-136--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 510)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.610-0500 I INDEX [conn181] validating collection local.oplog.rs (UUID: f999d0d7-cb6c-4d2c-a5ff-807a7ed09766)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.674-0500 I COMMAND [conn37] dropDatabase test3_fsmdb0 - starting
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.658-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635)'. Ident: 'index-166--7234316082034423155', commit timestamp: 'Timestamp(1574796715, 16)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.658-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635)'. Ident: 'index-166--2310912778499990807', commit timestamp: 'Timestamp(1574796715, 16)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.587-0500 I INDEX [conn80] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.610-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-141--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 510)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.623-0500 I INDEX [conn181] Validation complete for collection local.oplog.rs (UUID: f999d0d7-cb6c-4d2c-a5ff-807a7ed09766). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.675-0500 I COMMAND [conn37] dropDatabase test3_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.658-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635)'. Ident: 'index-167--7234316082034423155', commit timestamp: 'Timestamp(1574796715, 16)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.658-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635)'. Ident: 'index-167--2310912778499990807', commit timestamp: 'Timestamp(1574796715, 16)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.587-0500 I INDEX [conn80] Validation complete for collection local.replset.minvalid (UUID: 6654b1c2-f323-4c78-9165-5ff31d331960). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.612-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-135--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 510)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.624-0500 I COMMAND [conn181] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.675-0500 I COMMAND [conn37] dropDatabase test3_fsmdb0 - finished
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.658-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test3_fsmdb0.fsmcoll0'. Ident: collection-165--7234316082034423155, commit timestamp: Timestamp(1574796715, 16)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.658-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test3_fsmdb0.fsmcoll0'. Ident: collection-165--2310912778499990807, commit timestamp: Timestamp(1574796715, 16)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.588-0500 I COMMAND [conn80] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.613-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-152--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1014)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.625-0500 I INDEX [conn181] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.686-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 took 0 ms and failed :: caused by :: NamespaceNotFound: database test3_fsmdb0 not found
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.669-0500 I COMMAND [ReplWriterWorker-2] CMD: drop config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.669-0500 I COMMAND [ReplWriterWorker-11] CMD: drop config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.588-0500 I INDEX [conn80] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.614-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-159--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1014)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.627-0500 I INDEX [conn181] validating collection local.replset.election (UUID: 101a66fe-c3c0-4bee-94b9-e9bb8d04aa79)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:55.686-0500 I SHARDING [conn37] setting this node's cached database version for test3_fsmdb0 to {}
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.669-0500 I STORAGE [ReplWriterWorker-2] dropCollection: config.cache.chunks.test3_fsmdb0.fsmcoll0 (a33e44c0-60ea-478a-83bd-e45f3213aca7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796715, 25), t: 1 } and commit timestamp Timestamp(1574796715, 25)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.669-0500 I STORAGE [ReplWriterWorker-11] dropCollection: config.cache.chunks.test3_fsmdb0.fsmcoll0 (a33e44c0-60ea-478a-83bd-e45f3213aca7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796715, 25), t: 1 } and commit timestamp Timestamp(1574796715, 25)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.590-0500 I INDEX [conn80] validating collection local.replset.oplogTruncateAfterPoint (UUID: fe211210-ae1b-4ab2-81d6-86b025cc1404)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.615-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-151--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1014)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.627-0500 I INDEX [conn181] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.669-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for config.cache.chunks.test3_fsmdb0.fsmcoll0 (a33e44c0-60ea-478a-83bd-e45f3213aca7).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.669-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for config.cache.chunks.test3_fsmdb0.fsmcoll0 (a33e44c0-60ea-478a-83bd-e45f3213aca7).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.590-0500 I INDEX [conn80] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.616-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-154--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1520)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.627-0500 I INDEX [conn181] Validation complete for collection local.replset.election (UUID: 101a66fe-c3c0-4bee-94b9-e9bb8d04aa79). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.669-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0 (a33e44c0-60ea-478a-83bd-e45f3213aca7)'. Ident: 'index-170--7234316082034423155', commit timestamp: 'Timestamp(1574796715, 25)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.669-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0 (a33e44c0-60ea-478a-83bd-e45f3213aca7)'. Ident: 'index-170--2310912778499990807', commit timestamp: 'Timestamp(1574796715, 25)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.591-0500 I INDEX [conn80] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: fe211210-ae1b-4ab2-81d6-86b025cc1404). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.617-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-163--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1520)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.628-0500 I COMMAND [conn181] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.669-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0 (a33e44c0-60ea-478a-83bd-e45f3213aca7)'. Ident: 'index-171--7234316082034423155', commit timestamp: 'Timestamp(1574796715, 25)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.669-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0 (a33e44c0-60ea-478a-83bd-e45f3213aca7)'. Ident: 'index-171--2310912778499990807', commit timestamp: 'Timestamp(1574796715, 25)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.591-0500 I COMMAND [conn80] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.618-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-153--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1520)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.629-0500 I INDEX [conn181] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.669-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0'. Ident: collection-169--7234316082034423155, commit timestamp: Timestamp(1574796715, 25)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.669-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0'. Ident: collection-169--2310912778499990807, commit timestamp: Timestamp(1574796715, 25)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:56.139-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js started with pid 16094.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.592-0500 I INDEX [conn80] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.619-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-162--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b) with drop timestamp Timestamp(1574796676, 3095)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.631-0500 I INDEX [conn181] validating collection local.replset.minvalid (UUID: 5dfed1a1-c7a1-4f91-a3da-2544e54d2e9a)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.672-0500 I COMMAND [ReplWriterWorker-0] dropDatabase test3_fsmdb0 - starting
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.672-0500 I COMMAND [ReplWriterWorker-4] dropDatabase test3_fsmdb0 - starting
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.594-0500 I INDEX [conn80] validating collection local.startup_log (UUID: 7b6988ea-0c65-41a6-9855-5680c2c711a1)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.621-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-171--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b) with drop timestamp Timestamp(1574796676, 3095)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.631-0500 I INDEX [conn181] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.672-0500 I COMMAND [ReplWriterWorker-0] dropDatabase test3_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.672-0500 I COMMAND [ReplWriterWorker-4] dropDatabase test3_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.594-0500 I INDEX [conn80] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.622-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-161--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b) with drop timestamp Timestamp(1574796676, 3095)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.631-0500 I INDEX [conn181] Validation complete for collection local.replset.minvalid (UUID: 5dfed1a1-c7a1-4f91-a3da-2544e54d2e9a). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.672-0500 I COMMAND [ReplWriterWorker-0] dropDatabase test3_fsmdb0 - finished
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.672-0500 I COMMAND [ReplWriterWorker-4] dropDatabase test3_fsmdb0 - finished
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.594-0500 I INDEX [conn80] Validation complete for collection local.startup_log (UUID: 7b6988ea-0c65-41a6-9855-5680c2c711a1). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.624-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-168--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7) with drop timestamp Timestamp(1574796676, 3224)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.631-0500 I COMMAND [conn181] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:55.689-0500 I SHARDING [ReplWriterWorker-11] setting this node's cached database version for test3_fsmdb0 to {}
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:55.689-0500 I SHARDING [ReplWriterWorker-1] setting this node's cached database version for test3_fsmdb0 to {}
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.594-0500 I COMMAND [conn80] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.625-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-177--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7) with drop timestamp Timestamp(1574796676, 3224)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.632-0500 I INDEX [conn181] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.595-0500 I INDEX [conn80] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.626-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-167--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7) with drop timestamp Timestamp(1574796676, 3224)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.634-0500 I INDEX [conn181] validating collection local.replset.oplogTruncateAfterPoint (UUID: 31ce824c-ef86-4223-a4be-3069dae7b5f2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.597-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-123--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.5dbd970d-b36c-4930-ac6f-27b86cff2954) with drop timestamp Timestamp(1574796674, 1971)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.628-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-170--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990) with drop timestamp Timestamp(1574796676, 3292)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.634-0500 I INDEX [conn181] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.599-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-132--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb) with drop timestamp Timestamp(1574796674, 2528)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.629-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-175--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990) with drop timestamp Timestamp(1574796676, 3292)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.634-0500 I INDEX [conn181] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 31ce824c-ef86-4223-a4be-3069dae7b5f2). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.600-0500 I INDEX [conn80] validating collection local.system.replset (UUID: 920cbf66-0930-4ef5-82e9-10d7319f0fda)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.630-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-169--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990) with drop timestamp Timestamp(1574796676, 3292)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.635-0500 I COMMAND [conn181] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.600-0500 I INDEX [conn80] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.632-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-158--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 3540)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.635-0500 I INDEX [conn181] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.600-0500 I INDEX [conn80] Validation complete for collection local.system.replset (UUID: 920cbf66-0930-4ef5-82e9-10d7319f0fda). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.633-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-165--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 3540)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.637-0500 I INDEX [conn181] validating collection local.startup_log (UUID: fd9e05bb-cd6c-441c-9265-3783d4065b03)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.600-0500 I COMMAND [conn80] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.634-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-157--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 3540)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.637-0500 I INDEX [conn181] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.601-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-137--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb) with drop timestamp Timestamp(1574796674, 2528)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.634-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-180--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef) with drop timestamp Timestamp(1574796676, 4046)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.637-0500 I INDEX [conn181] Validation complete for collection local.startup_log (UUID: fd9e05bb-cd6c-441c-9265-3783d4065b03). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.602-0500 I INDEX [conn80] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.635-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-183--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef) with drop timestamp Timestamp(1574796676, 4046)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.638-0500 I COMMAND [conn181] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.603-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-131--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.1f87075f-cafa-4e0a-9cf9-e0e1432a71fb) with drop timestamp Timestamp(1574796674, 2528)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.636-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-179--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef) with drop timestamp Timestamp(1574796676, 4046)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.639-0500 I INDEX [conn181] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.605-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-118--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 2529)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.638-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-174--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 4555)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.641-0500 I INDEX [conn181] validating collection local.system.replset (UUID: 3eb8c3e8-f477-448c-9a25-5db5ef40b0d6)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.606-0500 I INDEX [conn80] validating collection local.system.rollback.id (UUID: 9434a858-83b3-4d87-8d66-64bde405790b)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.639-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-181--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 4555)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.641-0500 I INDEX [conn181] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.606-0500 I INDEX [conn80] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.641-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-173--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 4555)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.641-0500 I INDEX [conn181] Validation complete for collection local.system.replset (UUID: 3eb8c3e8-f477-448c-9a25-5db5ef40b0d6). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.606-0500 I INDEX [conn80] Validation complete for collection local.system.rollback.id (UUID: 9434a858-83b3-4d87-8d66-64bde405790b). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.643-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-190--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5562)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.641-0500 I COMMAND [conn181] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.608-0500 I COMMAND [conn80] CMD: validate test3_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.644-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-193--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5562)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.642-0500 I INDEX [conn181] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.608-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-125--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 2529)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.644-0500 I INDEX [conn86] Validation complete for collection local.oplog.rs (UUID: 88962763-38f7-4965-bfd6-b2a62304ae0e). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.644-0500 I INDEX [conn181] validating collection local.system.rollback.id (UUID: 223114bc-2956-4d9b-8f0a-5c567c2cb10e)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.609-0500 W STORAGE [conn80] Could not complete validation of table:collection-345--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.644-0500 I COMMAND [conn86] command local.$cmd appName: "MongoDB Shell" command: validate { validate: "oplog.rs", full: true, lsid: { id: UUID("062a8369-aa85-4710-98c0-d8991d6c8cc4") }, $clusterTime: { clusterTime: Timestamp(1574796709, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:678 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { W: 1 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 41073305, timeReadingMicros: 56062 } } protocol:op_msg 118ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.644-0500 I INDEX [conn181] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.609-0500 I INDEX [conn80] validating the internal structure of index _id_ on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.645-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-189--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5562)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.644-0500 I INDEX [conn181] Validation complete for collection local.system.rollback.id (UUID: 223114bc-2956-4d9b-8f0a-5c567c2cb10e). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.609-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-117--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796674, 2529)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.645-0500 I COMMAND [conn86] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.646-0500 I COMMAND [conn181] CMD: validate test3_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.609-0500 W STORAGE [conn80] Could not complete validation of table:index-346--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.646-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-188--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5563)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.646-0500 W STORAGE [conn181] Could not complete validation of table:collection-188--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.610-0500 I INDEX [conn80] validating the internal structure of index _id_hashed on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.647-0500 I INDEX [conn86] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.646-0500 I INDEX [conn181] validating the internal structure of index _id_ on collection test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.610-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-140--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae) with drop timestamp Timestamp(1574796674, 3539)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.648-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-199--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5563)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.646-0500 W STORAGE [conn181] Could not complete validation of table:index-189--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.610-0500 W STORAGE [conn80] Could not complete validation of table:index-347--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.651-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-187--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5563)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.646-0500 I INDEX [conn181] validating the internal structure of index _id_hashed on collection test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.611-0500 I INDEX [conn80] validating collection test3_fsmdb0.fsmcoll0 (UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.651-0500 I INDEX [conn86] validating collection local.replset.election (UUID: d0928956-d7fc-46fe-a9bc-1f07f2435457)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.646-0500 W STORAGE [conn181] Could not complete validation of table:index-190--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.611-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-145--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae) with drop timestamp Timestamp(1574796674, 3539)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.651-0500 I INDEX [conn86] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.647-0500 I INDEX [conn181] validating collection test3_fsmdb0.agg_out (UUID: d108d732-d756-4f25-8812-a6483de9ea4c)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.612-0500 I INDEX [conn80] validating index consistency _id_ on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.651-0500 I INDEX [conn86] Validation complete for collection local.replset.election (UUID: d0928956-d7fc-46fe-a9bc-1f07f2435457). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.648-0500 I INDEX [conn181] validating index consistency _id_ on collection test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.612-0500 I INDEX [conn80] validating index consistency _id_hashed on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.652-0500 I COMMAND [conn86] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.648-0500 I INDEX [conn181] validating index consistency _id_hashed on collection test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.612-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-139--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.d83c6fe8-86fd-453f-9329-2febf48e06ae) with drop timestamp Timestamp(1574796674, 3539)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.652-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-186--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6067)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.648-0500 I INDEX [conn181] Validation complete for collection test3_fsmdb0.agg_out (UUID: d108d732-d756-4f25-8812-a6483de9ea4c). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.612-0500 I INDEX [conn80] Validation complete for collection test3_fsmdb0.fsmcoll0 (UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.652-0500 W STORAGE [conn86] Could not complete validation of table:collection-4--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.649-0500 I COMMAND [conn181] CMD: validate test3_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.613-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-144--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555) with drop timestamp Timestamp(1574796676, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.652-0500 I INDEX [conn86] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.649-0500 W STORAGE [conn181] Could not complete validation of table:collection-156--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.614-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-149--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555) with drop timestamp Timestamp(1574796676, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.654-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-197--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6067)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.649-0500 I INDEX [conn181] validating the internal structure of index _id_ on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.614-0500 I NETWORK [conn80] end connection 127.0.0.1:53676 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.656-0500 I INDEX [conn86] validating collection local.replset.minvalid (UUID: 6eb6e647-60c7-450a-a905-f04052287b8a)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.649-0500 W STORAGE [conn181] Could not complete validation of table:index-157--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.615-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-143--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.fe2da1db-2174-492d-9bcd-fd2f7a61d555) with drop timestamp Timestamp(1574796676, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.656-0500 I INDEX [conn86] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.649-0500 I INDEX [conn181] validating the internal structure of index _id_hashed on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.617-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-148--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896) with drop timestamp Timestamp(1574796676, 507)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.656-0500 I INDEX [conn86] Validation complete for collection local.replset.minvalid (UUID: 6eb6e647-60c7-450a-a905-f04052287b8a). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.649-0500 W STORAGE [conn181] Could not complete validation of table:index-158--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.619-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-155--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896) with drop timestamp Timestamp(1574796676, 507)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.656-0500 I COMMAND [conn86] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.650-0500 I INDEX [conn181] validating collection test3_fsmdb0.fsmcoll0 (UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.620-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-147--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.4c0ff4fd-34a8-481c-8903-5e78593b8896) with drop timestamp Timestamp(1574796676, 507)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.657-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-185--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6067)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.651-0500 I INDEX [conn181] validating index consistency _id_ on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.621-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-136--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 510)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.658-0500 I INDEX [conn86] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.651-0500 I INDEX [conn181] validating index consistency _id_hashed on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.622-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-141--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 510)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.659-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-192--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6573)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.651-0500 I INDEX [conn181] Validation complete for collection test3_fsmdb0.fsmcoll0 (UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.623-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-135--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 510)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.661-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-201--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6573)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.653-0500 I NETWORK [conn181] end connection 127.0.0.1:47214 (40 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.624-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-152--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1014)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.662-0500 I INDEX [conn86] validating collection local.replset.oplogTruncateAfterPoint (UUID: 5d41bfc8-ebca-43f3-a038-30023495a91a)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.693-0500 I NETWORK [conn180] end connection 127.0.0.1:47194 (39 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.625-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-159--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1014)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.662-0500 I INDEX [conn86] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:54.702-0500 I NETWORK [conn179] end connection 127.0.0.1:47192 (38 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.628-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-151--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1014)
[fsm_workload_test:agg_out] 2019-11-26T14:31:56.148-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.662-0500 I INDEX [conn86] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 5d41bfc8-ebca-43f3-a038-30023495a91a). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.629-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-154--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1520)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.634-0500 I COMMAND [conn55] CMD: drop test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.663-0500 I COMMAND [conn86] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.630-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-163--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1520)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.634-0500 I STORAGE [conn55] dropCollection: test3_fsmdb0.agg_out (d108d732-d756-4f25-8812-a6483de9ea4c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.663-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-191--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6573)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.632-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-153--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 1520)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.634-0500 I STORAGE [conn55] Finishing collection drop for test3_fsmdb0.agg_out (d108d732-d756-4f25-8812-a6483de9ea4c).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.663-0500 I INDEX [conn86] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.633-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-162--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b) with drop timestamp Timestamp(1574796676, 3095)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.634-0500 I STORAGE [conn55] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.agg_out (d108d732-d756-4f25-8812-a6483de9ea4c)'. Ident: 'index-189--2588534479858262356', commit timestamp: 'Timestamp(1574796715, 7)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.664-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-196--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 7082)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.634-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-171--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b) with drop timestamp Timestamp(1574796676, 3095)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.634-0500 I STORAGE [conn55] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.agg_out (d108d732-d756-4f25-8812-a6483de9ea4c)'. Ident: 'index-190--2588534479858262356', commit timestamp: 'Timestamp(1574796715, 7)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.666-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-205--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 7082)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.634-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-161--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.afc41594-2b52-4e14-9a42-2fff539e312b) with drop timestamp Timestamp(1574796676, 3095)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.634-0500 I STORAGE [conn55] Deferring table drop for collection 'test3_fsmdb0.agg_out'. Ident: collection-188--2588534479858262356, commit timestamp: Timestamp(1574796715, 7)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.667-0500 I INDEX [conn86] validating collection local.startup_log (UUID: e0cc0511-0005-4584-a461-5ae30058b4c6)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.635-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-168--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7) with drop timestamp Timestamp(1574796676, 3224)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.643-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.agg_out took 1 ms and found the collection is not sharded
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.667-0500 I INDEX [conn86] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.638-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-177--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7) with drop timestamp Timestamp(1574796676, 3224)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.643-0500 I SHARDING [conn55] Updating metadata for collection test3_fsmdb0.agg_out from collection version: 1|0||5ddd7da3cf8184c2e1493fd4, shard version: 1|0||5ddd7da3cf8184c2e1493fd4 to collection version: due to UUID change
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.667-0500 I INDEX [conn86] Validation complete for collection local.startup_log (UUID: e0cc0511-0005-4584-a461-5ae30058b4c6). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.639-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-167--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.190d2893-0362-406e-8119-bb4d653c33e7) with drop timestamp Timestamp(1574796676, 3224)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.643-0500 I COMMAND [ShardServerCatalogCacheLoader-2] CMD: drop config.cache.chunks.test3_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.668-0500 I COMMAND [conn86] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.640-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-170--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990) with drop timestamp Timestamp(1574796676, 3292)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.643-0500 I STORAGE [ShardServerCatalogCacheLoader-2] dropCollection: config.cache.chunks.test3_fsmdb0.agg_out (4c26dac0-af8d-4579-bbb5-32356c1d2f49) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.668-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-195--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 7082)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.641-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-175--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990) with drop timestamp Timestamp(1574796676, 3292)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.643-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Finishing collection drop for config.cache.chunks.test3_fsmdb0.agg_out (4c26dac0-af8d-4579-bbb5-32356c1d2f49).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.669-0500 I INDEX [conn86] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.642-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-169--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.65fba55d-6296-4168-9fe7-10d8e033b990) with drop timestamp Timestamp(1574796676, 3292)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.643-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test3_fsmdb0.agg_out (4c26dac0-af8d-4579-bbb5-32356c1d2f49)'. Ident: 'index-207--2588534479858262356', commit timestamp: 'Timestamp(1574796715, 11)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.671-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-204--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.643-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-158--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 3540)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.643-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test3_fsmdb0.agg_out (4c26dac0-af8d-4579-bbb5-32356c1d2f49)'. Ident: 'index-210--2588534479858262356', commit timestamp: 'Timestamp(1574796715, 11)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.673-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-213--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.644-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-165--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 3540)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.643-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Deferring table drop for collection 'config.cache.chunks.test3_fsmdb0.agg_out'. Ident: collection-205--2588534479858262356, commit timestamp: Timestamp(1574796715, 11)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.674-0500 I INDEX [conn86] validating collection local.system.replset (UUID: 3b8c02e8-ec29-4e79-912d-3e315d1d851c)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.646-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-157--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 3540)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.656-0500 I COMMAND [conn55] CMD: drop test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.674-0500 I INDEX [conn86] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.648-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-180--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef) with drop timestamp Timestamp(1574796676, 4046)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.656-0500 I STORAGE [conn55] dropCollection: test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.674-0500 I INDEX [conn86] Validation complete for collection local.system.replset (UUID: 3b8c02e8-ec29-4e79-912d-3e315d1d851c). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.649-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-183--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef) with drop timestamp Timestamp(1574796676, 4046)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.656-0500 I STORAGE [conn55] Finishing collection drop for test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.674-0500 I COMMAND [conn86] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.650-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-179--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.1e2fc57e-6eb2-4a49-829d-1555906f62ef) with drop timestamp Timestamp(1574796676, 4046)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.656-0500 I STORAGE [conn55] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635)'. Ident: 'index-157--2588534479858262356', commit timestamp: 'Timestamp(1574796715, 16)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.675-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-203--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.651-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-174--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 4555)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.656-0500 I STORAGE [conn55] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635)'. Ident: 'index-158--2588534479858262356', commit timestamp: 'Timestamp(1574796715, 16)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.675-0500 I INDEX [conn86] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.653-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-181--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 4555)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.656-0500 I STORAGE [conn55] Deferring table drop for collection 'test3_fsmdb0.fsmcoll0'. Ident: collection-156--2588534479858262356, commit timestamp: Timestamp(1574796715, 16)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.676-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-210--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.654-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-173--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 4555)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.665-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test3_fsmdb0.fsmcoll0 took 0 ms and found the collection is not sharded
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.679-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-217--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.655-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-190--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5562)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.665-0500 I SHARDING [conn55] Updating metadata for collection test3_fsmdb0.fsmcoll0 from collection version: 1|3||5ddd7da0cf8184c2e1493df9, shard version: 1|3||5ddd7da0cf8184c2e1493df9 to collection version: due to UUID change
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.679-0500 I INDEX [conn86] validating collection local.system.rollback.id (UUID: 1099f6d7-f170-471c-a0ac-dc97bd7e42b0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.656-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-193--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5562)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.665-0500 I COMMAND [ShardServerCatalogCacheLoader-2] CMD: drop config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.680-0500 I INDEX [conn86] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.658-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-189--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5562)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.665-0500 I STORAGE [ShardServerCatalogCacheLoader-2] dropCollection: config.cache.chunks.test3_fsmdb0.fsmcoll0 (a33e44c0-60ea-478a-83bd-e45f3213aca7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.680-0500 I INDEX [conn86] Validation complete for collection local.system.rollback.id (UUID: 1099f6d7-f170-471c-a0ac-dc97bd7e42b0). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.660-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-188--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5563)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.665-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Finishing collection drop for config.cache.chunks.test3_fsmdb0.fsmcoll0 (a33e44c0-60ea-478a-83bd-e45f3213aca7).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.680-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-209--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.660-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-199--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5563)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.665-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0 (a33e44c0-60ea-478a-83bd-e45f3213aca7)'. Ident: 'index-161--2588534479858262356', commit timestamp: 'Timestamp(1574796715, 25)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.681-0500 I COMMAND [conn86] CMD: validate test3_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.662-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-187--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 5563)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.665-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0 (a33e44c0-60ea-478a-83bd-e45f3213aca7)'. Ident: 'index-162--2588534479858262356', commit timestamp: 'Timestamp(1574796715, 25)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.681-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-208--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 574)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.663-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-186--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6067)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.665-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Deferring table drop for collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0'. Ident: collection-160--2588534479858262356, commit timestamp: Timestamp(1574796715, 25)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.681-0500 W STORAGE [conn86] Could not complete validation of table:collection-345--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.664-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-197--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6067)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.670-0500 I COMMAND [conn55] dropDatabase test3_fsmdb0 - starting
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.681-0500 I INDEX [conn86] validating the internal structure of index _id_ on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.665-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-185--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6067)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.670-0500 I COMMAND [conn55] dropDatabase test3_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.683-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-219--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 574)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.666-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-192--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6573)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.670-0500 I COMMAND [conn55] dropDatabase test3_fsmdb0 - finished
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.683-0500 W STORAGE [conn86] Could not complete validation of table:index-346--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.668-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-201--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6573)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.687-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test3_fsmdb0 took 0 ms and failed :: caused by :: NamespaceNotFound: database test3_fsmdb0 not found
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.683-0500 I INDEX [conn86] validating the internal structure of index _id_hashed on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.669-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-191--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 6573)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:55.687-0500 I SHARDING [conn55] setting this node's cached database version for test3_fsmdb0 to {}
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.685-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-207--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 574)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.670-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-196--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 7082)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.685-0500 W STORAGE [conn86] Could not complete validation of table:index-347--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.671-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-205--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 7082)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.686-0500 I INDEX [conn86] validating collection test3_fsmdb0.fsmcoll0 (UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.672-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-195--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796676, 7082)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.686-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-212--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1015)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.674-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-204--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.687-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-221--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1015)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.675-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-213--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.687-0500 I INDEX [conn86] validating index consistency _id_ on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.676-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-203--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.687-0500 I INDEX [conn86] validating index consistency _id_hashed on collection test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.678-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-210--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.688-0500 I INDEX [conn86] Validation complete for collection test3_fsmdb0.fsmcoll0 (UUID: 81145456-1c0e-4ef0-89a6-ab06e3485635). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.679-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-217--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.688-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-211--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1015)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.681-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-209--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.689-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-216--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1523)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.682-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-208--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 574)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.689-0500 I NETWORK [conn86] end connection 127.0.0.1:52784 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.683-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-219--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 574)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.690-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-229--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1523)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.685-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-207--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 574)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.691-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-215--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1523)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.686-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-212--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1015)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.692-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-224--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2030)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.687-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-221--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1015)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.694-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-233--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2030)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.689-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-211--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1015)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.695-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-223--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2030)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.690-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-216--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1523)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.696-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-226--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2597)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.691-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-229--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1523)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.697-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-237--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2597)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.693-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-215--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 1523)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.698-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-225--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2597)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.694-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-224--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2030)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.700-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-228--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3102)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.695-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-233--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2030)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.700-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-239--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3102)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.696-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-223--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2030)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.702-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-227--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3102)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.697-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-226--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2597)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.702-0500 I NETWORK [conn85] end connection 127.0.0.1:52748 (10 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.700-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-237--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2597)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.704-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-236--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3) with drop timestamp Timestamp(1574796680, 4431)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.701-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-225--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 2597)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.705-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-245--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3) with drop timestamp Timestamp(1574796680, 4431)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.702-0500 I NETWORK [conn79] end connection 127.0.0.1:53640 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.707-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-235--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3) with drop timestamp Timestamp(1574796680, 4431)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.702-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-228--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3102)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.708-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-242--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a) with drop timestamp Timestamp(1574796680, 4560)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.704-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-239--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3102)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.709-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-251--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a) with drop timestamp Timestamp(1574796680, 4560)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.705-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-227--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796680, 3102)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.711-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-241--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a) with drop timestamp Timestamp(1574796680, 4560)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.706-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-236--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3) with drop timestamp Timestamp(1574796680, 4431)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.712-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-248--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e) with drop timestamp Timestamp(1574796680, 5119)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.707-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-245--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3) with drop timestamp Timestamp(1574796680, 4431)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.713-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-253--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e) with drop timestamp Timestamp(1574796680, 5119)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.708-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-235--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.6f779bea-5847-4773-b92a-69cc2e3a24f3) with drop timestamp Timestamp(1574796680, 4431)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.715-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-247--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e) with drop timestamp Timestamp(1574796680, 5119)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.711-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-242--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a) with drop timestamp Timestamp(1574796680, 4560)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.716-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-250--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8) with drop timestamp Timestamp(1574796680, 5559)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:56.157-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.712-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-251--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a) with drop timestamp Timestamp(1574796680, 4560)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.198-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45626 #157 (1 connection now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:56.198-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.216-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57032 #130 (30 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.779-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.220-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39786 #178 (38 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.780-0500 Implicit session: session { "id" : UUID("246c822b-3a86-4323-8254-dde84c2adc17") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.223-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35820 #77 (10 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.780-0500 Implicit session: session { "id" : UUID("8c963e07-5fe7-4362-a801-b511c428f3db") }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.223-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47264 #182 (39 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.780-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.223-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52466 #83 (10 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:56.241-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58822 #51 (1 connection now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.780-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.717-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-257--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8) with drop timestamp Timestamp(1574796680, 5559)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.780-0500 true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.713-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-241--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.5fdb5170-4332-41c9-876a-16db8e5c6e1a) with drop timestamp Timestamp(1574796680, 4560)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.781-0500 true
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.199-0500 I NETWORK [conn157] received client metadata from 127.0.0.1:45626 conn157: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.781-0500 2019-11-26T14:31:56.214-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.216-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57034 #131 (31 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.781-0500 2019-11-26T14:31:56.216-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.221-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39790 #179 (39 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.781-0500 2019-11-26T14:31:56.216-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.223-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35828 #78 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.782-0500 2019-11-26T14:31:56.216-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:56.241-0500 I NETWORK [conn51] received client metadata from 127.0.0.1:58822 conn51: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.782-0500 2019-11-26T14:31:56.217-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.223-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47266 #183 (40 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.782-0500 2019-11-26T14:31:56.217-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.223-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52470 #84 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.782-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.719-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-249--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8) with drop timestamp Timestamp(1574796680, 5559)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.782-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.783-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.783-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.783-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.783-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.783-0500 [jsTest] New session started with sessionID: { "id" : UUID("33d4a84a-c471-4456-8bb4-2d152c567224") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.715-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-248--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e) with drop timestamp Timestamp(1574796680, 5119)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.783-0500 [jsTest] New session started with sessionID: { "id" : UUID("e5903ba2-5d81-486b-8cee-f545083e7ea9") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.207-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45628 #158 (2 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.783-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.216-0500 I NETWORK [conn130] received client metadata from 127.0.0.1:57032 conn130: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.783-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.221-0500 I NETWORK [conn178] received client metadata from 127.0.0.1:39786 conn178: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.784-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.223-0500 I NETWORK [conn77] received client metadata from 127.0.0.1:35820 conn77: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.784-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:56.760-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58880 #52 (2 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.784-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.223-0500 I NETWORK [conn182] received client metadata from 127.0.0.1:47264 conn182: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.784-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.223-0500 I NETWORK [conn83] received client metadata from 127.0.0.1:52466 conn83: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.784-0500 2019-11-26T14:31:56.220-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.720-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-256--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc) with drop timestamp Timestamp(1574796680, 5560)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.785-0500 2019-11-26T14:31:56.220-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.716-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-253--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e) with drop timestamp Timestamp(1574796680, 5119)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.785-0500 2019-11-26T14:31:56.220-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.208-0500 I NETWORK [conn158] received client metadata from 127.0.0.1:45628 conn158: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.785-0500 2019-11-26T14:31:56.220-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.217-0500 I NETWORK [conn131] received client metadata from 127.0.0.1:57034 conn131: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.785-0500 2019-11-26T14:31:56.220-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.221-0500 I NETWORK [conn179] received client metadata from 127.0.0.1:39790 conn179: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.785-0500 2019-11-26T14:31:56.220-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.223-0500 I NETWORK [conn78] received client metadata from 127.0.0.1:35828 conn78: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.786-0500 2019-11-26T14:31:56.220-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:56.760-0500 I NETWORK [conn52] received client metadata from 127.0.0.1:58880 conn52: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.786-0500 2019-11-26T14:31:56.220-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.223-0500 I NETWORK [conn183] received client metadata from 127.0.0.1:47266 conn183: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.786-0500 2019-11-26T14:31:56.221-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.223-0500 I NETWORK [conn84] received client metadata from 127.0.0.1:52470 conn84: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.786-0500 2019-11-26T14:31:56.221-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.721-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-259--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc) with drop timestamp Timestamp(1574796680, 5560)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.786-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.717-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-247--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.861b1fcd-ea05-462b-8cac-5cacd320134e) with drop timestamp Timestamp(1574796680, 5119)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.786-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.233-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45670 #159 (3 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.787-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.217-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57036 #132 (32 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.787-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.221-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39792 #180 (40 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.787-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.254-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35868 #79 (12 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.787-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:56.762-0500 I NETWORK [listener] connection accepted from 127.0.0.1:58884 #53 (3 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.787-0500 [jsTest] New session started with sessionID: { "id" : UUID("38f9ebd9-73d3-4b7a-8e94-ddd0b74ab86c") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.224-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47274 #184 (41 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.787-0500 [jsTest] New session started with sessionID: { "id" : UUID("61aa30b1-1bf1-4c23-822f-3ca64481415b") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.254-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52506 #85 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.788-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.722-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-255--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc) with drop timestamp Timestamp(1574796680, 5560)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.788-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.718-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-250--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8) with drop timestamp Timestamp(1574796680, 5559)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.788-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.233-0500 I NETWORK [conn159] received client metadata from 127.0.0.1:45670 conn159: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.788-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.217-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57038 #133 (33 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.788-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.221-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39794 #181 (41 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.788-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.254-0500 I NETWORK [conn79] received client metadata from 127.0.0.1:35868 conn79: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.789-0500 2019-11-26T14:31:56.223-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:56.762-0500 I NETWORK [conn53] received client metadata from 127.0.0.1:58884 conn53: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.789-0500 2019-11-26T14:31:56.222-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.224-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47276 #185 (42 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.789-0500 2019-11-26T14:31:56.223-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.254-0500 I NETWORK [conn85] received client metadata from 127.0.0.1:52506 conn85: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.789-0500 2019-11-26T14:31:56.223-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.724-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-232--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.789-0500 2019-11-26T14:31:56.223-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.719-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-257--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8) with drop timestamp Timestamp(1574796680, 5559)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.790-0500 2019-11-26T14:31:56.223-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.240-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45678 #160 (4 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.790-0500 2019-11-26T14:31:56.223-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.217-0500 I NETWORK [conn132] received client metadata from 127.0.0.1:57036 conn132: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.790-0500 2019-11-26T14:31:56.223-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.221-0500 I NETWORK [conn180] received client metadata from 127.0.0.1:39792 conn180: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.790-0500 2019-11-26T14:31:56.223-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.296-0500 I STORAGE [ReplWriterWorker-4] createCollection: test4_fsmdb0.fsmcoll0 with provided UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd and options: { uuid: UUID("08555f78-3db2-4ee9-9e10-8c80139ec7dd") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.790-0500 2019-11-26T14:31:56.223-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:56.773-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.790-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.224-0500 I NETWORK [conn184] received client metadata from 127.0.0.1:47274 conn184: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.791-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.296-0500 I STORAGE [ReplWriterWorker-5] createCollection: test4_fsmdb0.fsmcoll0 with provided UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd and options: { uuid: UUID("08555f78-3db2-4ee9-9e10-8c80139ec7dd") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.791-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.726-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-243--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.791-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.721-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-249--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.1ffbe3c7-24c8-451d-a7d3-adf7911b02e8) with drop timestamp Timestamp(1574796680, 5559)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.791-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.240-0500 I NETWORK [conn160] received client metadata from 127.0.0.1:45678 conn160: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.791-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.217-0500 I NETWORK [conn133] received client metadata from 127.0.0.1:57038 conn133: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.792-0500 [jsTest] New session started with sessionID: { "id" : UUID("534f138f-cd9f-4ad2-aa0e-48c0df492849") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.221-0500 I NETWORK [conn181] received client metadata from 127.0.0.1:39794 conn181: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.792-0500 [jsTest] New session started with sessionID: { "id" : UUID("f4bd1496-95f5-45d6-82b4-c87a5de2ff12") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.311-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.792-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:56.775-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.fsmcoll0 to version 1|3||5ddd7daccf8184c2e1494359 took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.792-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.224-0500 I NETWORK [conn185] received client metadata from 127.0.0.1:47276 conn185: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.792-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.311-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.792-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.727-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-231--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.793-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.723-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-256--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc) with drop timestamp Timestamp(1574796680, 5560)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.793-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.303-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45706 #161 (5 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.793-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.235-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57074 #134 (34 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.793-0500 setting random seed: 69551526
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.237-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39816 #182 (42 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.793-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.315-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35888 #80 (13 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.793-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:56.968-0500 I COMMAND [conn52] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a") }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 195ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.794-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.239-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47284 #186 (43 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.794-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.314-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52524 #86 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.794-0500 Implicit session: session { "id" : UUID("4864891c-c2b1-41a5-bf0d-b2f177d16ecc") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.728-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-262--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 506)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.794-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.724-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-259--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc) with drop timestamp Timestamp(1574796680, 5560)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.794-0500 Implicit session: session { "id" : UUID("21764d7a-08a2-4abd-a402-58c4be45e728") }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.303-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45708 #162 (6 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.795-0500 [jsTest] New session started with sessionID: { "id" : UUID("c4321d09-9482-4636-a869-233b549f8b9f") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.235-0500 I NETWORK [conn134] received client metadata from 127.0.0.1:57074 conn134: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.795-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.237-0500 I NETWORK [conn182] received client metadata from 127.0.0.1:39816 conn182: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.795-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.315-0500 I NETWORK [conn80] received client metadata from 127.0.0.1:35888 conn80: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.795-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:57.781-0500 I COMMAND [conn53] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70") }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6\", to: \"test4_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:714 protocol:op_msg 1008ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.795-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.239-0500 I NETWORK [conn186] received client metadata from 127.0.0.1:47284 conn186: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.795-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.314-0500 I NETWORK [conn86] received client metadata from 127.0.0.1:52524 conn86: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.796-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.729-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-265--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 506)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.796-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.726-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-255--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.fbb70fe6-2f99-4a1d-a5db-49b439fd6cbc) with drop timestamp Timestamp(1574796680, 5560)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.796-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.303-0500 I NETWORK [conn161] received client metadata from 127.0.0.1:45706 conn161: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.796-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.243-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57084 #135 (35 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.796-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.246-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39828 #183 (43 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.796-0500 [jsTest] New session started with sessionID: { "id" : UUID("663266ff-5c17-495d-93fa-8ce76683230d") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.326-0500 W CONTROL [conn80] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 77 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.797-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.251-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47304 #187 (44 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.797-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.325-0500 W CONTROL [conn86] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 124 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.797-0500 [jsTest] New session started with sessionID: { "id" : UUID("80a7cf5e-1d82-4475-aa45-ac26cf5705b0") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.730-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-261--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 506)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.797-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.727-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-232--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.797-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.304-0500 I NETWORK [conn162] received client metadata from 127.0.0.1:45708 conn162: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.798-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.798-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.243-0500 I NETWORK [conn135] received client metadata from 127.0.0.1:57084 conn135: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.798-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.247-0500 I NETWORK [conn183] received client metadata from 127.0.0.1:39828 conn183: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.798-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.330-0500 I INDEX [ReplWriterWorker-9] index build: starting on test4_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.798-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.251-0500 I NETWORK [conn187] received client metadata from 127.0.0.1:47304 conn187: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.798-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.330-0500 I INDEX [ReplWriterWorker-15] index build: starting on test4_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.799-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.732-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-264--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2213)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.799-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.728-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-243--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.799-0500 [jsTest] New session started with sessionID: { "id" : UUID("db17e775-88d0-404d-a74b-992b10a3890d") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.331-0500 I NETWORK [conn161] end connection 127.0.0.1:45706 (5 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.799-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.245-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57086 #136 (36 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.799-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.248-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39830 #184 (44 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.800-0500 [jsTest] New session started with sessionID: { "id" : UUID("f7cb5042-8acf-4214-82ca-bc73902d31e6") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.330-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.800-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.253-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47306 #188 (45 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.800-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.330-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.800-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.733-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-273--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2213)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.800-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.729-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-231--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.800-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.339-0500 I NETWORK [conn162] end connection 127.0.0.1:45708 (4 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.801-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.245-0500 I NETWORK [conn136] received client metadata from 127.0.0.1:57086 conn136: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.801-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.249-0500 I NETWORK [conn184] received client metadata from 127.0.0.1:39830 conn184: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.801-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.330-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 6e4ef99a-0960-49c1-a46d-fbc5a975015f: test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.801-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.253-0500 I NETWORK [conn188] received client metadata from 127.0.0.1:47306 conn188: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.801-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.330-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: f67726dd-85d4-42e5-ae00-597fb45aed06: test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.802-0500 [jsTest] New session started with sessionID: { "id" : UUID("2405e975-c78f-44f4-85e1-cf47189c1796") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.734-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-263--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2213)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.802-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.731-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-262--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 506)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.802-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.342-0500 I NETWORK [conn158] end connection 127.0.0.1:45628 (3 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.802-0500 [jsTest] New session started with sessionID: { "id" : UUID("6722a14e-9604-4443-8595-de9acd3bee84") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.266-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0' acquired for 'dropCollection', ts : 5ddd7dac5cde74b6784bb8fe
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.802-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.250-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39836 #185 (45 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.803-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.330-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.803-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.255-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47312 #189 (46 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.803-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.330-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.803-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.736-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-268--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2214)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.803-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.733-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-265--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 506)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.803-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.437-0500 I COMMAND [conn160] command test4_fsmdb0.fsmcoll0 appName: "MongoDB Shell" command: shardCollection { shardCollection: "test4_fsmdb0.fsmcoll0", key: { _id: "hashed" }, lsid: { id: UUID("3e50cee9-4d66-4935-b89a-35b9db2732f4") }, $clusterTime: { clusterTime: Timestamp(1574796716, 10), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:245 protocol:op_msg 157ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.804-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.268-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0.fsmcoll0' acquired for 'dropCollection', ts : 5ddd7dac5cde74b6784bb900
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.804-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.250-0500 I NETWORK [conn185] received client metadata from 127.0.0.1:39836 conn185: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.804-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.331-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.804-0500 [jsTest] New session started with sessionID: { "id" : UUID("f652a0dd-ae08-47d8-a0aa-513a1d530dc4") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.255-0500 I NETWORK [conn189] received client metadata from 127.0.0.1:47312 conn189: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.804-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.330-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.804-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.737-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-281--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2214)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.805-0500 [jsTest] New session started with sessionID: { "id" : UUID("c01020e5-51ae-417e-b76c-341ee1f9d8ec") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.734-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-261--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 506)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.805-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.533-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.805-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.269-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dac5cde74b6784bb900 unlocked.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.805-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.311-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39854 #186 (46 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.805-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.334-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.805-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.276-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.806-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.333-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.806-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.738-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-267--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2214)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.806-0500 Recreating replica set from config {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.736-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-264--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2213)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.806-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.535-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.fsmcoll0 to version 1|3||5ddd7daccf8184c2e1494359 took 1 ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.806-0500 "_id" : "config-rs",
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.270-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dac5cde74b6784bb8fe unlocked.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.806-0500 [jsTest] New session started with sessionID: { "id" : UUID("ea95f11e-2583-4def-b00f-d6b4e683b37d") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.311-0500 I NETWORK [conn186] received client metadata from 127.0.0.1:39854 conn186: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.807-0500 "version" : 1,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.335-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 6e4ef99a-0960-49c1-a46d-fbc5a975015f: test4_fsmdb0.fsmcoll0 ( 08555f78-3db2-4ee9-9e10-8c80139ec7dd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.807-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.277-0500 I SHARDING [conn55] setting this node's cached database version for test4_fsmdb0 to { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.807-0500 "configsvr" : true,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.334-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: f67726dd-85d4-42e5-ae00-597fb45aed06: test4_fsmdb0.fsmcoll0 ( 08555f78-3db2-4ee9-9e10-8c80139ec7dd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.807-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.738-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-270--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2343)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.807-0500 "protocolVersion" : NumberLong(1),
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.737-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-273--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2213)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.807-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.622-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out took 0 ms and found the collection is not sharded
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.807-0500 "writeConcernMajorityJournalDefault" : true,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.272-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0' acquired for 'enableSharding', ts : 5ddd7dac5cde74b6784bb908
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.808-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.314-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39858 #187 (47 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.808-0500 "members" : [
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.338-0500 W CONTROL [conn80] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 79 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.808-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.284-0500 I STORAGE [conn55] createCollection: test4_fsmdb0.fsmcoll0 with provided UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd and options: { uuid: UUID("08555f78-3db2-4ee9-9e10-8c80139ec7dd") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.808-0500 {
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.337-0500 W CONTROL [conn86] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 126 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.808-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.739-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-279--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2343)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.808-0500 "_id" : 0,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.738-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-263--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2213)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.809-0500 [jsTest] New session started with sessionID: { "id" : UUID("66ab480e-6e88-4611-9272-2037c999c43b") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.708-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45726 #163 (4 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.809-0500 "host" : "localhost:20000",
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.275-0500 I SHARDING [conn17] Registering new database { _id: "test4_fsmdb0", primary: "shard-rs1", partitioned: false, version: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } } in sharding catalog
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.809-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.314-0500 I NETWORK [conn187] received client metadata from 127.0.0.1:39858 conn187: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.809-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.340-0500 I NETWORK [conn80] end connection 127.0.0.1:35888 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.809-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.295-0500 I INDEX [conn55] index build: done building index _id_ on ns test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.809-0500 "buildIndexes" : true,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.339-0500 I NETWORK [conn86] end connection 127.0.0.1:52524 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.810-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.740-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-269--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2343)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.810-0500 "hidden" : false,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.739-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-268--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2214)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.810-0500 Running data consistency checks for replica set: shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.708-0500 I NETWORK [conn163] received client metadata from 127.0.0.1:45726 conn163: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.810-0500 "priority" : 1,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.277-0500 I SHARDING [conn17] Enabling sharding for database [test4_fsmdb0] in config db
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.810-0500 Running data consistency checks for replica set: shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.324-0500 W CONTROL [conn187] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 327 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.810-0500 "tags" : {
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.349-0500 I NETWORK [conn78] end connection 127.0.0.1:35828 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.811-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.295-0500 I INDEX [conn55] Registering index build: 561f7892-1757-4a88-9456-1358b7e9eb0d
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.811-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.349-0500 I NETWORK [conn84] end connection 127.0.0.1:52470 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.811-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.741-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-276--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c) with drop timestamp Timestamp(1574796683, 3029)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.811-0500 },
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.739-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-281--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2214)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.811-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.717-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45728 #164 (5 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.811-0500 "slaveDelay" : NumberLong(0),
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.278-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dac5cde74b6784bb908' unlocked.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.812-0500 [jsTest] New session started with sessionID: { "id" : UUID("1be8c0d9-e3cf-424f-a4fa-8d03ac69c971") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.329-0500 W CONTROL [conn187] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 327 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.812-0500 "votes" : 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.406-0500 I STORAGE [ReplWriterWorker-0] createCollection: config.cache.chunks.test4_fsmdb0.fsmcoll0 with provided UUID: c7f3cab2-be92-4a48-8ca9-60ce74a83411 and options: { uuid: UUID("c7f3cab2-be92-4a48-8ca9-60ce74a83411") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.812-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.310-0500 I INDEX [conn55] index build: starting on test4_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.812-0500 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.406-0500 I STORAGE [ReplWriterWorker-4] createCollection: config.cache.chunks.test4_fsmdb0.fsmcoll0 with provided UUID: c7f3cab2-be92-4a48-8ca9-60ce74a83411 and options: { uuid: UUID("c7f3cab2-be92-4a48-8ca9-60ce74a83411") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.812-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.743-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-285--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c) with drop timestamp Timestamp(1574796683, 3029)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.812-0500 ],
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.740-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-267--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2214)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.812-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.717-0500 I NETWORK [conn164] received client metadata from 127.0.0.1:45728 conn164: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.813-0500 "settings" : {
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.281-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0' acquired for 'shardCollection', ts : 5ddd7dac5cde74b6784bb911
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.813-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.331-0500 I NETWORK [conn186] end connection 127.0.0.1:39854 (46 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.813-0500 "chainingAllowed" : true,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.420-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns config.cache.chunks.test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.813-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.310-0500 I INDEX [conn55] build may temporarily use up to 500 megabytes of RAM
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.813-0500 "heartbeatIntervalMillis" : 2000,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.424-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns config.cache.chunks.test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.813-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.745-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-275--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c) with drop timestamp Timestamp(1574796683, 3029)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.814-0500 "heartbeatTimeoutSecs" : 10,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.743-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-270--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2343)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.814-0500 [jsTest] New session started with sessionID: { "id" : UUID("e7d48e62-b109-4794-ba8e-c98e9279b0a4") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.720-0500 I NETWORK [conn163] end connection 127.0.0.1:45726 (4 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.814-0500 "electionTimeoutMillis" : 86400000,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.282-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0.fsmcoll0' acquired for 'shardCollection', ts : 5ddd7dac5cde74b6784bb913
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.814-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.332-0500 I NETWORK [conn187] end connection 127.0.0.1:39858 (45 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.814-0500 "catchUpTimeoutMillis" : -1,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.440-0500 I INDEX [ReplWriterWorker-7] index build: starting on config.cache.chunks.test4_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.815-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.310-0500 I STORAGE [conn55] Index build initialized: 561f7892-1757-4a88-9456-1358b7e9eb0d: test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd ): indexes: 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.815-0500 "catchUpTakeoverDelayMillis" : 30000,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.441-0500 I INDEX [ReplWriterWorker-7] index build: starting on config.cache.chunks.test4_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.815-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.746-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-278--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88) with drop timestamp Timestamp(1574796683, 3034)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.815-0500 "getLastErrorModes" : {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.744-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-279--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2343)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.815-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.748-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45730 #165 (5 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.815-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.342-0500 I NETWORK [conn133] end connection 127.0.0.1:57038 (35 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.816-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.335-0500 I STORAGE [conn73] createCollection: test4_fsmdb0.fsmcoll0 with provided UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd and options: { uuid: UUID("08555f78-3db2-4ee9-9e10-8c80139ec7dd") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.816-0500 },
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.440-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.816-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.310-0500 I INDEX [conn55] Waiting for index build to complete: 561f7892-1757-4a88-9456-1358b7e9eb0d
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.816-0500 "getLastErrorDefaults" : {
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.442-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.816-0500 [jsTest] New session started with sessionID: { "id" : UUID("5b7f7200-2bfc-4bb3-8d83-44a8a8bbf9b7") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.747-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-287--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88) with drop timestamp Timestamp(1574796683, 3034)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.816-0500 "w" : 1,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.745-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-269--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 2343)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.816-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.748-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45731 #166 (6 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.817-0500 "wtimeout" : 0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.349-0500 I NETWORK [conn131] end connection 127.0.0.1:57034 (34 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.817-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.342-0500 I NETWORK [conn181] end connection 127.0.0.1:39794 (44 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.817-0500 },
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.440-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 032f8ba0-7393-4a2f-88de-cd55de347613: config.cache.chunks.test4_fsmdb0.fsmcoll0 (c7f3cab2-be92-4a48-8ca9-60ce74a83411 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.817-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.310-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.817-0500 "replicaSetId" : ObjectId("5ddd7d655cde74b6784bb14d")
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.442-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: ab46244b-52f6-4df4-99f4-64039a2b571d: config.cache.chunks.test4_fsmdb0.fsmcoll0 (c7f3cab2-be92-4a48-8ca9-60ce74a83411 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.817-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.748-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-277--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88) with drop timestamp Timestamp(1574796683, 3034)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.818-0500 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.746-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-276--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c) with drop timestamp Timestamp(1574796683, 3029)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.818-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.748-0500 I NETWORK [conn165] received client metadata from 127.0.0.1:45730 conn165: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.818-0500 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.383-0500 D4 TXN [conn52] New transaction started with txnNumber: 0 on session with lsid fe0d839f-a967-4beb-abc7-1341e0642327
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.818-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.345-0500 I INDEX [conn73] index build: done building index _id_ on ns test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.818-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.440-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.818-0500 [jsTest] New session started with sessionID: { "id" : UUID("a4ef1273-cfb0-43f4-a31f-3bd66c98624a") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.311-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.818-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.442-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.819-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.749-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-272--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 3540)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.819-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.747-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-285--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c) with drop timestamp Timestamp(1574796683, 3029)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.819-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.748-0500 I NETWORK [conn166] received client metadata from 127.0.0.1:45731 conn166: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.819-0500 [jsTest] New session started with sessionID: { "id" : UUID("d93f2b01-3450-4a47-862f-0d0ad34dafdc") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.433-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.819-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.349-0500 I NETWORK [conn179] end connection 127.0.0.1:39790 (43 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.819-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.441-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.819-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.311-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47318 #190 (47 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.819-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.442-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.820-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.751-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-283--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 3540)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.820-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.748-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-275--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.9a16799f-71ce-4a7a-9884-e7750979fc9c) with drop timestamp Timestamp(1574796683, 3029)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.820-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.749-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45734 #167 (7 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.820-0500 Recreating replica set from config {
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.434-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.fsmcoll0 to version 1|3||5ddd7daccf8184c2e1494359 took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.820-0500 [jsTest] New session started with sessionID: { "id" : UUID("6164e761-6940-45fa-8acb-a2427294d3f9") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.352-0500 I INDEX [conn73] index build: done building index _id_hashed on ns test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.820-0500 "_id" : "shard-rs0",
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.442-0500 I SHARDING [ReplWriterWorker-3] Marking collection config.cache.chunks.test4_fsmdb0.fsmcoll0 as collection version:
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.820-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.311-0500 I NETWORK [conn190] received client metadata from 127.0.0.1:47318 conn190: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.820-0500 "version" : 2,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.443-0500 I SHARDING [ReplWriterWorker-3] Marking collection config.cache.chunks.test4_fsmdb0.fsmcoll0 as collection version:
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.821-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.752-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-271--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 3540)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.821-0500 "protocolVersion" : NumberLong(1),
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.749-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-278--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88) with drop timestamp Timestamp(1574796683, 3034)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.821-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.749-0500 I NETWORK [conn167] received client metadata from 127.0.0.1:45734 conn167: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.821-0500 "writeConcernMajorityJournalDefault" : true,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.435-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dac5cde74b6784bb913' unlocked.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.821-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.353-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.821-0500 "members" : [
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.444-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.821-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.313-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.821-0500 {
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.445-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.822-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.753-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-290--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 4178)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.822-0500 "_id" : 0,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.750-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-287--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88) with drop timestamp Timestamp(1574796683, 3034)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.822-0500 [jsTest] New session started with sessionID: { "id" : UUID("b4229bf2-04c9-4ccf-8ea4-f4d6ab7186b1") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.749-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45736 #168 (8 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.822-0500 "host" : "localhost:20001",
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.437-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dac5cde74b6784bb911' unlocked.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.822-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.354-0500 I SHARDING [conn73] Marking collection test4_fsmdb0.fsmcoll0 as collection version:
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.822-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.444-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.822-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.313-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47322 #191 (48 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.823-0500 "buildIndexes" : true,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.445-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index lastmod_1 on ns config.cache.chunks.test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.823-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.823-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.823-0500 "priority" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.823-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.823-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.823-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.823-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.823-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.823-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.823-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.823-0500 "_id" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "host" : "localhost:20002",
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "_id" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "host" : "localhost:20003",
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.824-0500 ],
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 "settings" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 "chainingAllowed" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 "heartbeatIntervalMillis" : 2000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 "heartbeatTimeoutSecs" : 10,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 "electionTimeoutMillis" : 86400000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 "catchUpTimeoutMillis" : -1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 "catchUpTakeoverDelayMillis" : 30000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 "getLastErrorModes" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 "getLastErrorDefaults" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 "w" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 "wtimeout" : 0
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 "replicaSetId" : ObjectId("5ddd7d683bbfe7fa5630d3b8")
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 [jsTest] New session started with sessionID: { "id" : UUID("f285c2b7-a164-422c-a09b-f40eb11a9183") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.825-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.826-0500 Recreating replica set from config {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.826-0500 "_id" : "shard-rs1",
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.826-0500 "version" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.826-0500 "protocolVersion" : NumberLong(1),
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.826-0500 "writeConcernMajorityJournalDefault" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.826-0500 "members" : [
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.826-0500 {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.756-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-295--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 4178)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.826-0500 "_id" : 0,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.752-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-277--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.fbd36e90-50bf-4c7c-a74e-012366c85f88) with drop timestamp Timestamp(1574796683, 3034)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.826-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js finished.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.749-0500 I NETWORK [conn168] received client metadata from 127.0.0.1:45736 conn168: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.437-0500 I COMMAND [conn17] command admin.$cmd appName: "MongoDB Shell" command: _configsvrShardCollection { _configsvrShardCollection: "test4_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("3e50cee9-4d66-4935-b89a-35b9db2732f4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1574796716, 10), signature: { hash: BinData(0, 66C38E54DEAE5E92FDAAA24F5665FFCF0FE11F41), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45678", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 10), t: 1 } }, $db: "admin" } numYields:0 reslen:587 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 6 } }, Global: { acquireCount: { r: 2, w: 4 } }, Database: { acquireCount: { r: 2, w: 4 } }, Collection: { acquireCount: { r: 2, w: 4 } }, Mutex: { acquireCount: { r: 10, W: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 157ms
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.826-0500 "host" : "localhost:20004",
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.389-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.fsmcoll0 to version 1|3||5ddd7daccf8184c2e1494359 took 1 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.827-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash_background.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash_background"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash_background.js
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.445-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 032f8ba0-7393-4a2f-88de-cd55de347613: config.cache.chunks.test4_fsmdb0.fsmcoll0 ( c7f3cab2-be92-4a48-8ca9-60ce74a83411 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.828-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.828-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.828-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.828-0500 "priority" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.828-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.828-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.828-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.828-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.828-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.828-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.828-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.828-0500 "_id" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.828-0500 "host" : "localhost:20005",
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 "_id" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 "host" : "localhost:20006",
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.829-0500 ],
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 "settings" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 "chainingAllowed" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 "heartbeatIntervalMillis" : 2000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 "heartbeatTimeoutSecs" : 10,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 "electionTimeoutMillis" : 86400000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 "catchUpTimeoutMillis" : -1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 "catchUpTakeoverDelayMillis" : 30000,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 "getLastErrorModes" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 "getLastErrorDefaults" : {
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 "w" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 "wtimeout" : 0
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 "replicaSetId" : ObjectId("5ddd7d6bcf8184c2e1492eba")
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 [jsTest] New session started with sessionID: { "id" : UUID("25b49edb-fa37-4d95-89f8-cc250a098a58") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.830-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500 [jsTest] New session started with sessionID: { "id" : UUID("18aa4eea-9419-461d-8017-f5b71ad4166a") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500 [jsTest] New session started with sessionID: { "id" : UUID("4c98e61f-883d-413d-979d-a8864ebabcb4") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500 [jsTest] New session started with sessionID: { "id" : UUID("b1a67bdd-b8cd-40f5-acd6-efb9becafb98") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.831-0500 [jsTest] New session started with sessionID: { "id" : UUID("63da5c7f-81c8-4bb2-a77d-6a87c88babd2") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500 [jsTest] New session started with sessionID: { "id" : UUID("1835bac2-0b12-4723-8145-3983f966f412") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500 [jsTest] Workload(s) started: jstests/concurrency/fsm_workloads/agg_out.js
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500 [jsTest] New session started with sessionID: { "id" : UUID("3e50cee9-4d66-4935-b89a-35b9db2732f4") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.832-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 Using 5 threads (requested 5)
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 Implicit session: session { "id" : UUID("fe60c746-120b-48c8-b5ea-01c7637d8759") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 Implicit session: session { "id" : UUID("5f02947c-845a-45ba-a12b-96ab912cb804") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 Implicit session: session { "id" : UUID("341898b6-115b-46a8-8bb5-8f42e5742d9b") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 Implicit session: session { "id" : UUID("ccc55c7c-8176-464a-962c-840cafd2f99c") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 Implicit session: session { "id" : UUID("f6a2ab45-7c4a-4243-b1a8-fedaff90a52f") }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 [tid:2] setting random seed: 3290546049
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 [tid:0] setting random seed: 3761882155
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 [tid:3] setting random seed: 2861304758
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 [tid:1] setting random seed: 1001160670
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 [tid:4] setting random seed: 131553252
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 [tid:2]
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.833-0500 [jsTest] New session started with sessionID: { "id" : UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.834-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.834-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.834-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.834-0500 [tid:1]
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.834-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.314-0500 I NETWORK [conn191] received client metadata from 127.0.0.1:47322 conn191: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.834-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500 [jsTest] New session started with sessionID: { "id" : UUID("3a910b40-797a-442e-8f27-720741d58d70") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500 [tid:3]
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500 [jsTest] New session started with sessionID: { "id" : UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500 [tid:0]
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500 [jsTest] New session started with sessionID: { "id" : UUID("9d61435e-d844-47c7-b952-b761253a3458") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.838-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.839-0500 [tid:4]
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.839-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.839-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.839-0500 [jsTest] New session started with sessionID: { "id" : UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.839-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.839-0500
[fsm_workload_test:agg_out] 2019-11-26T14:31:57.839-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.447-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: ab46244b-52f6-4df4-99f4-64039a2b571d: config.cache.chunks.test4_fsmdb0.fsmcoll0 ( c7f3cab2-be92-4a48-8ca9-60ce74a83411 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.756-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-289--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 4178)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.753-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-272--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 3540)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.760-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45740 #169 (9 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.439-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0' acquired for 'enableSharding', ts : 5ddd7dac5cde74b6784bb932
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.389-0500 I SHARDING [conn63] Updating metadata for collection test4_fsmdb0.fsmcoll0 from collection version: to collection version: 1|3||5ddd7daccf8184c2e1494359, shard version: 1|1||5ddd7daccf8184c2e1494359 due to version change
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.601-0500 I STORAGE [ReplWriterWorker-5] createCollection: test4_fsmdb0.agg_out with provided UUID: 779cf8b3-6313-4651-b1cd-c5e10b7b79fc and options: { uuid: UUID("779cf8b3-6313-4651-b1cd-c5e10b7b79fc") }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.314-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 561f7892-1757-4a88-9456-1358b7e9eb0d: test4_fsmdb0.fsmcoll0 ( 08555f78-3db2-4ee9-9e10-8c80139ec7dd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.601-0500 I STORAGE [ReplWriterWorker-15] createCollection: test4_fsmdb0.agg_out with provided UUID: 779cf8b3-6313-4651-b1cd-c5e10b7b79fc and options: { uuid: UUID("779cf8b3-6313-4651-b1cd-c5e10b7b79fc") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.757-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-292--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5052)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.754-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-283--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 3540)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.760-0500 I NETWORK [conn169] received client metadata from 127.0.0.1:45740 conn169: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.439-0500 I SHARDING [conn17] Enabling sharding for database [test4_fsmdb0] in config db
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.390-0500 I STORAGE [ShardServerCatalogCacheLoader-0] createCollection: config.cache.chunks.test4_fsmdb0.fsmcoll0 with provided UUID: 647e6274-b0dc-4671-90c7-65b5ed709ba8 and options: { uuid: UUID("647e6274-b0dc-4671-90c7-65b5ed709ba8") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.616-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.315-0500 I INDEX [conn55] Index build completed: 561f7892-1757-4a88-9456-1358b7e9eb0d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.616-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.758-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-301--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5052)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.756-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-271--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 3540)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.762-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45744 #170 (10 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.443-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dac5cde74b6784bb932' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.408-0500 I INDEX [ShardServerCatalogCacheLoader-0] index build: done building index _id_ on ns config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.646-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796709, 8)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.325-0500 W CONTROL [conn191] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 82 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.632-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796709, 8)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.759-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-291--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5052)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.756-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-290--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 4178)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.762-0500 I NETWORK [conn170] received client metadata from 127.0.0.1:45744 conn170: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.445-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0' acquired for 'shardCollection', ts : 5ddd7dac5cde74b6784bb938
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.408-0500 I INDEX [ShardServerCatalogCacheLoader-0] Registering index build: 50bfd88a-c587-454c-b702-dcbf86547a07
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.646-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-42--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1030)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.333-0500 I SHARDING [conn55] CMD: shardcollection: { _shardsvrShardCollection: "test4_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("3e50cee9-4d66-4935-b89a-35b9db2732f4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796716, 12), signature: { hash: BinData(0, 66C38E54DEAE5E92FDAAA24F5665FFCF0FE11F41), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45678", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 12), t: 1 } }, $db: "admin" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.632-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-42--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1030)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.760-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-294--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5053)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.757-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-295--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 4178)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.763-0500 I NETWORK [conn167] end connection 127.0.0.1:45734 (9 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.446-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0.fsmcoll0' acquired for 'shardCollection', ts : 5ddd7dac5cde74b6784bb93a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.424-0500 I INDEX [ShardServerCatalogCacheLoader-0] index build: starting on config.cache.chunks.test4_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.650-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-43--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1030)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.333-0500 I SHARDING [conn55] about to log metadata event into changelog: { _id: "nz_desktop:20004-2019-11-26T14:31:56.333-0500-5ddd7daccf8184c2e1494357", server: "nz_desktop:20004", shard: "shard-rs1", clientAddr: "127.0.0.1:46028", time: new Date(1574796716333), what: "shardCollection.start", ns: "test4_fsmdb0.fsmcoll0", details: { shardKey: { _id: "hashed" }, collection: "test4_fsmdb0.fsmcoll0", uuid: UUID("08555f78-3db2-4ee9-9e10-8c80139ec7dd"), empty: true, fromMapReduce: false, primary: "shard-rs1:shard-rs1/localhost:20004,localhost:20005,localhost:20006", numChunks: 4 } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.633-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-43--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1030)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.635-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-41--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1030)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.758-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-289--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 4178)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.763-0500 I NETWORK [conn168] end connection 127.0.0.1:45736 (8 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.447-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.424-0500 I INDEX [ShardServerCatalogCacheLoader-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.657-0500 I INDEX [ReplWriterWorker-0] index build: starting on test4_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.337-0500 W CONTROL [conn191] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 84 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.762-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-305--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5053)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.636-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-46--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1540)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.760-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-292--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5052)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.765-0500 I NETWORK [conn166] end connection 127.0.0.1:45731 (7 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.448-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.fsmcoll0 to version 1|3||5ddd7daccf8184c2e1494359 took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.424-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Index build initialized: 50bfd88a-c587-454c-b702-dcbf86547a07: config.cache.chunks.test4_fsmdb0.fsmcoll0 (647e6274-b0dc-4671-90c7-65b5ed709ba8 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.657-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.339-0500 I NETWORK [conn190] end connection 127.0.0.1:47318 (47 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.763-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-293--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5053)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.637-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-55--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1540)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.762-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-301--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5052)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.765-0500 I NETWORK [conn165] end connection 127.0.0.1:45730 (6 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.449-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dac5cde74b6784bb93a' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.424-0500 I INDEX [ShardServerCatalogCacheLoader-0] Waiting for index build to complete: 50bfd88a-c587-454c-b702-dcbf86547a07
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.657-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 44ba457c-8e99-4aa7-b702-913de3c10d1f: test4_fsmdb0.agg_out (779cf8b3-6313-4651-b1cd-c5e10b7b79fc ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.339-0500 I NETWORK [conn191] end connection 127.0.0.1:47322 (46 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.765-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-298--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5624)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.638-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-45--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1540)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.763-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-291--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5052)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.965-0500 I COMMAND [conn169] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51") }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 192ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.451-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dac5cde74b6784bb938' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.424-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.657-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.343-0500 I NETWORK [conn185] end connection 127.0.0.1:47276 (45 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.767-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-303--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5624)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.640-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-48--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 2301)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.764-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-294--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5053)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.969-0500 I COMMAND [conn170] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6") }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 196ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.587-0500 I SHARDING [conn52] distributed lock 'test4_fsmdb0' acquired for 'createCollection', ts : 5ddd7dac5cde74b6784bb949
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.425-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.658-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-41--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1030)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.349-0500 I NETWORK [conn183] end connection 127.0.0.1:47266 (44 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.768-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-297--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5624)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.641-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-57--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 2301)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.765-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-305--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5053)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:56.970-0500 I COMMAND [conn164] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458") }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 197ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.588-0500 I SHARDING [conn52] distributed lock 'test4_fsmdb0.agg_out' acquired for 'createCollection', ts : 5ddd7dac5cde74b6784bb94b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.428-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index lastmod_1 on ns config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.658-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.388-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.fsmcoll0 to version 1|3||5ddd7daccf8184c2e1494359 took 1 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.769-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-300--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 6064)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.642-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-47--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 2301)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.766-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-293--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5053)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.619-0500 I SHARDING [conn52] distributed lock with ts: 5ddd7dac5cde74b6784bb94b' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.429-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 50bfd88a-c587-454c-b702-dcbf86547a07: config.cache.chunks.test4_fsmdb0.fsmcoll0 ( 647e6274-b0dc-4671-90c7-65b5ed709ba8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.661-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.388-0500 I SHARDING [conn55] Marking collection test4_fsmdb0.fsmcoll0 as collection version: 1|3||5ddd7daccf8184c2e1494359, shard version: 1|3||5ddd7daccf8184c2e1494359
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.770-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-309--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 6064)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.644-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-50--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3613)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.767-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-298--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5624)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:56.620-0500 I SHARDING [conn52] distributed lock with ts: 5ddd7dac5cde74b6784bb949' unlocked.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.429-0500 I INDEX [ShardServerCatalogCacheLoader-0] Index build completed: 50bfd88a-c587-454c-b702-dcbf86547a07
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.663-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-46--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1540)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.388-0500 I STORAGE [ShardServerCatalogCacheLoader-2] createCollection: config.cache.chunks.test4_fsmdb0.fsmcoll0 with provided UUID: c7f3cab2-be92-4a48-8ca9-60ce74a83411 and options: { uuid: UUID("c7f3cab2-be92-4a48-8ca9-60ce74a83411") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.772-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-299--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 6064)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.653-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-61--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3613)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.768-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-303--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5624)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.846-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js started with pid 16189.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:57.350-0500 I NETWORK [conn32] end connection 127.0.0.1:55636 (33 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:57.869-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:56.429-0500 I SHARDING [ShardServerCatalogCacheLoader-0] Marking collection config.cache.chunks.test4_fsmdb0.fsmcoll0 as collection version:
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.860-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.860-0500 Implicit session: session { "id" : UUID("72b5fa74-ab10-49c7-a513-18fa35aa5acb") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.860-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.860-0500 true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.860-0500 2019-11-26T14:31:57.931-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.860-0500 2019-11-26T14:31:57.931-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.860-0500 2019-11-26T14:31:57.936-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.860-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.860-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.860-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.860-0500 [jsTest] New session started with sessionID: { "id" : UUID("be90170b-6ffb-4d47-957e-6cc64895a7cd") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500 2019-11-26T14:31:57.940-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500 2019-11-26T14:31:57.940-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500 2019-11-26T14:31:57.940-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500 2019-11-26T14:31:57.940-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500 2019-11-26T14:31:57.941-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500 [jsTest] New session started with sessionID: { "id" : UUID("bd4f8823-83e1-477b-8d14-221843ddc79c") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500 2019-11-26T14:31:57.946-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500 2019-11-26T14:31:57.946-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500 2019-11-26T14:31:57.946-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500 2019-11-26T14:31:57.946-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500 2019-11-26T14:31:57.947-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.861-0500 [jsTest] New session started with sessionID: { "id" : UUID("df109370-4e93-4c59-9e40-7020916301fa") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500 Implicit session: session { "id" : UUID("6489efda-9781-465e-9d6e-d48bfaaa7084") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.666-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 44ba457c-8e99-4aa7-b702-913de3c10d1f: test4_fsmdb0.agg_out ( 779cf8b3-6313-4651-b1cd-c5e10b7b79fc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500 Implicit session: session { "id" : UUID("9a2bdb27-0e57-45f3-a7a9-9d4c3b275c72") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500 [jsTest] New session started with sessionID: { "id" : UUID("ca8121bf-43f8-4d5a-bf92-f0aa2e923b80") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.862-0500 [jsTest] New session started with sessionID: { "id" : UUID("b187a8aa-0bf4-454c-b243-df2f80d48b3e") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500 [jsTest] New session started with sessionID: { "id" : UUID("420a2179-61f4-4280-a3a3-747c6981746c") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500 Running data consistency checks for replica set: shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500 [jsTest] New session started with sessionID: { "id" : UUID("c0ef91a4-03db-4afa-84ac-ce13240a8cc9") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500 [jsTest] New session started with sessionID: { "id" : UUID("91a19d11-a9c5-4d91-9525-8872c09ff1c4") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.863-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.864-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.864-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.864-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.404-0500 I INDEX [ShardServerCatalogCacheLoader-2] index build: done building index _id_ on ns config.cache.chunks.test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.864-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.864-0500 [jsTest] New session started with sessionID: { "id" : UUID("28bf7c75-e423-4a19-ba18-23a3cba4c8b9") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.773-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-312--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201) with drop timestamp Timestamp(1574796686, 1011)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.864-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.662-0500 I INDEX [ReplWriterWorker-0] index build: starting on test4_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.864-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.770-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-297--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 5624)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.864-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:57.893-0500 I COMMAND [conn164] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458") }, $clusterTime: { clusterTime: Timestamp(1574796717, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 888ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.864-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:57.893-0500 I COMMAND [conn52] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a") }, $clusterTime: { clusterTime: Timestamp(1574796716, 3031), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717\", to: \"test4_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:714 protocol:op_msg 923ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.865-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.865-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.865-0500 [jsTest] New session started with sessionID: { "id" : UUID("db795662-f3ac-4cfe-b943-0087ca1d9254") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:57.350-0500 I NETWORK [conn33] end connection 127.0.0.1:55638 (32 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.865-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:57.350-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 3 connections to that host remain open
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.865-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.667-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-55--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1540)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.865-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.404-0500 I INDEX [ShardServerCatalogCacheLoader-2] Registering index build: 97d29b48-0669-4cbc-a8b4-edbeb69619c3
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.865-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.774-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-319--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201) with drop timestamp Timestamp(1574796686, 1011)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.865-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.662-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.866-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.772-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-300--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 6064)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.866-0500 [jsTest] New session started with sessionID: { "id" : UUID("98eb342c-c9e6-46e0-b0b4-c58e6086b80d") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:57.893-0500 I COMMAND [conn169] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51") }, $clusterTime: { clusterTime: Timestamp(1574796716, 2850), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05\", to: \"test4_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:714 protocol:op_msg 926ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.866-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:31:57.953-0500 I COMMAND [conn53] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70") }, $clusterTime: { clusterTime: Timestamp(1574796717, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 170ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.866-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.866-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:57.350-0500 I NETWORK [conn34] end connection 127.0.0.1:55640 (31 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.866-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:57.350-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 2 connections to that host remain open
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.866-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.668-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-45--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 1540)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.867-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.420-0500 I INDEX [ShardServerCatalogCacheLoader-2] index build: starting on config.cache.chunks.test4_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.867-0500 [jsTest] New session started with sessionID: { "id" : UUID("b07e922c-02c8-49ab-897f-7e9671c127c3") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.776-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-311--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201) with drop timestamp Timestamp(1574796686, 1011)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.867-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.662-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 59387fd8-d366-455d-aea9-8c8ac3ddeea0: test4_fsmdb0.agg_out (779cf8b3-6313-4651-b1cd-c5e10b7b79fc ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.867-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.773-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-309--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 6064)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.867-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:57.893-0500 I COMMAND [conn170] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6") }, $clusterTime: { clusterTime: Timestamp(1574796716, 3032), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996\", to: \"test4_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:714 protocol:op_msg 921ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.867-0500 Running data consistency checks for replica set: shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:57.358-0500 I NETWORK [conn37] end connection 127.0.0.1:55646 (30 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.867-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:57.350-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 1 connections to that host remain open
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.868-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.670-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-48--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 2301)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.868-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.420-0500 I INDEX [ShardServerCatalogCacheLoader-2] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.868-0500 [jsTest] New session started with sessionID: { "id" : UUID("3f42616f-e948-49d9-97ee-c2eb72d5ff98") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.777-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-314--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163) with drop timestamp Timestamp(1574796686, 1018)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.868-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.662-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.868-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.775-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-299--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796683, 6064)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.868-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:57.922-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45770 #171 (7 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.868-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:57.358-0500 I NETWORK [conn38] end connection 127.0.0.1:55648 (29 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.868-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:57.358-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39894 #188 (44 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.869-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.671-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-57--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 2301)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.869-0500 [jsTest] New session started with sessionID: { "id" : UUID("cdc95860-821e-434f-a778-c3f0a4a57ae9") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.420-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Index build initialized: 97d29b48-0669-4cbc-a8b4-edbeb69619c3: config.cache.chunks.test4_fsmdb0.fsmcoll0 (c7f3cab2-be92-4a48-8ca9-60ce74a83411 ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.869-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.778-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-323--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163) with drop timestamp Timestamp(1574796686, 1018)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.869-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.663-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-49--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3613)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.869-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.776-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-312--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201) with drop timestamp Timestamp(1574796686, 1011)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.869-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:57.922-0500 I NETWORK [conn171] received client metadata from 127.0.0.1:45770 conn171: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.869-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:57.392-0500 I NETWORK [conn53] end connection 127.0.0.1:55680 (28 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.870-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:57.359-0500 I NETWORK [conn188] received client metadata from 127.0.0.1:39894 conn188: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.870-0500 [jsTest] New session started with sessionID: { "id" : UUID("086388f9-3be7-40d1-9140-cfac036fe2cd") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.672-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-47--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 2301)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.870-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.870-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:31:59.870-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.420-0500 I INDEX [ShardServerCatalogCacheLoader-2] Waiting for index build to complete: 97d29b48-0669-4cbc-a8b4-edbeb69619c3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.779-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-313--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163) with drop timestamp Timestamp(1574796686, 1018)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.663-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.777-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-319--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201) with drop timestamp Timestamp(1574796686, 1011)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:58.002-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45792 #172 (8 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:57.392-0500 I NETWORK [conn55] end connection 127.0.0.1:55684 (27 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:57.395-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39906 #189 (45 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.673-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-50--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3613)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.420-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.779-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-316--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23) with drop timestamp Timestamp(1574796686, 1720)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.664-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-54--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3614)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.778-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-311--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.5f8d06cd-2259-47ba-ae4b-98a1b2de1201) with drop timestamp Timestamp(1574796686, 1011)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:58.002-0500 I NETWORK [conn172] received client metadata from 127.0.0.1:45792 conn172: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:57.396-0500 I NETWORK [conn189] received client metadata from 127.0.0.1:39906 conn189: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:57.781-0500 I NETWORK [conn51] end connection 127.0.0.1:55676 (26 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.674-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-61--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3613)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.421-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.781-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-327--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23) with drop timestamp Timestamp(1574796686, 1720)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.782-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-315--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23) with drop timestamp Timestamp(1574796686, 1720)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.783-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-322--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368) with drop timestamp Timestamp(1574796686, 2032)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:58.010-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45796 #173 (9 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:57.781-0500 I NETWORK [conn48] end connection 127.0.0.1:55670 (25 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:57.935-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57174 #137 (26 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.676-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-49--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3613)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.423-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index lastmod_1 on ns config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.667-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.779-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-314--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163) with drop timestamp Timestamp(1574796686, 1018)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.785-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-331--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368) with drop timestamp Timestamp(1574796686, 2032)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:58.010-0500 I NETWORK [conn173] received client metadata from 127.0.0.1:45796 conn173: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:57.940-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39918 #190 (46 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:57.936-0500 I NETWORK [conn137] received client metadata from 127.0.0.1:57174 conn137: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.677-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-54--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3614)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.426-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 97d29b48-0669-4cbc-a8b4-edbeb69619c3: config.cache.chunks.test4_fsmdb0.fsmcoll0 ( c7f3cab2-be92-4a48-8ca9-60ce74a83411 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.668-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-65--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3614)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.779-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-323--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163) with drop timestamp Timestamp(1574796686, 1018)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.786-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-321--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368) with drop timestamp Timestamp(1574796686, 2032)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:58.040-0500 I COMMAND [conn164] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458") }, $clusterTime: { clusterTime: Timestamp(1574796717, 2218), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 146ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:58.044-0500 I NETWORK [conn172] end connection 127.0.0.1:45792 (8 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:57.936-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57176 #138 (27 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.679-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-65--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3614)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.426-0500 I INDEX [ShardServerCatalogCacheLoader-2] Index build completed: 97d29b48-0669-4cbc-a8b4-edbeb69619c3
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.669-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 59387fd8-d366-455d-aea9-8c8ac3ddeea0: test4_fsmdb0.agg_out ( 779cf8b3-6313-4651-b1cd-c5e10b7b79fc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.782-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-313--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.6c79a125-4bad-4def-a3e7-0908af735163) with drop timestamp Timestamp(1574796686, 1018)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.787-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-334--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b) with drop timestamp Timestamp(1574796686, 2545)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:31:58.082-0500 I COMMAND [conn169] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51") }, $clusterTime: { clusterTime: Timestamp(1574796717, 2218), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 188ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:57.940-0500 I NETWORK [conn190] received client metadata from 127.0.0.1:39918 conn190: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:31:57.936-0500 I NETWORK [conn138] received client metadata from 127.0.0.1:57176 conn138: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.680-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-53--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3614)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.426-0500 I SHARDING [ShardServerCatalogCacheLoader-2] Marking collection config.cache.chunks.test4_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.671-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-53--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3614)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.783-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-316--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23) with drop timestamp Timestamp(1574796686, 1720)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.787-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-335--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b) with drop timestamp Timestamp(1574796686, 2545)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:57.941-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39924 #191 (47 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.681-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-52--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3615)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.432-0500 I SHARDING [conn55] Created 4 chunk(s) for: test4_fsmdb0.fsmcoll0, producing collection version 1|3||5ddd7daccf8184c2e1494359
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.672-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-52--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3615)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.785-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-327--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23) with drop timestamp Timestamp(1574796686, 1720)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.788-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-333--4104909142373009110 (ns: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b) with drop timestamp Timestamp(1574796686, 2545)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.789-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-308--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.682-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-63--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3615)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.432-0500 I SHARDING [conn55] about to log metadata event into changelog: { _id: "nz_desktop:20004-2019-11-26T14:31:56.432-0500-5ddd7daccf8184c2e149438b", server: "nz_desktop:20004", shard: "shard-rs1", clientAddr: "127.0.0.1:46028", time: new Date(1574796716432), what: "shardCollection.end", ns: "test4_fsmdb0.fsmcoll0", details: { version: "1|3||5ddd7daccf8184c2e1494359" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.673-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-63--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3615)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.786-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-315--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.716f4b91-8864-45ce-84d2-6eb9e0983c23) with drop timestamp Timestamp(1574796686, 1720)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:57.945-0500 I NETWORK [conn191] received client metadata from 127.0.0.1:39924 conn191: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.790-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-317--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.683-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-51--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3615)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.433-0500 I COMMAND [conn55] command admin.$cmd appName: "MongoDB Shell" command: _shardsvrShardCollection { _shardsvrShardCollection: "test4_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("3e50cee9-4d66-4935-b89a-35b9db2732f4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796716, 12), signature: { hash: BinData(0, 66C38E54DEAE5E92FDAAA24F5665FFCF0FE11F41), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45678", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 12), t: 1 } }, $db: "admin" } numYields:0 reslen:415 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 9 } }, ReplicationStateTransition: { acquireCount: { w: 15 } }, Global: { acquireCount: { r: 8, w: 7 } }, Database: { acquireCount: { r: 8, w: 7, W: 1 } }, Collection: { acquireCount: { r: 8, w: 3, W: 4 } }, Mutex: { acquireCount: { r: 16, W: 4 } } } flowControl:{ acquireCount: 5 } storage:{} protocol:op_msg 149ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.674-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-51--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 3615)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.787-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-322--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368) with drop timestamp Timestamp(1574796686, 2032)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:58.010-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39936 #192 (48 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.792-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-307--4104909142373009110 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.684-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-60--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4055)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.588-0500 I STORAGE [conn55] createCollection: test4_fsmdb0.agg_out with generated UUID: 779cf8b3-6313-4651-b1cd-c5e10b7b79fc and options: {}
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.675-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-60--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4055)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.787-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-331--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368) with drop timestamp Timestamp(1574796686, 2032)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.788-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-321--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.705db2fd-80cd-48e7-9341-b1797719e368) with drop timestamp Timestamp(1574796686, 2032)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.793-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-326--4104909142373009110 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 9)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.687-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-67--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4055)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.599-0500 I INDEX [conn55] index build: done building index _id_ on ns test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.676-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-67--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4055)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:58.010-0500 I NETWORK [conn192] received client metadata from 127.0.0.1:39936 conn192: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.789-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-334--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b) with drop timestamp Timestamp(1574796686, 2545)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.792-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-335--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b) with drop timestamp Timestamp(1574796686, 2545)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.688-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-59--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4055)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.609-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796709, 8)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.677-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-59--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4055)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:58.012-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39940 #193 (49 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.794-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-329--4104909142373009110 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 9)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.793-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-333--8000595249233899911 (ns: test1_fsmdb0.tmp.agg_out.16e42dc2-6e60-4435-9c1d-4819a30a344b) with drop timestamp Timestamp(1574796686, 2545)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.690-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-70--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4561)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.609-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-165--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 1078)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.678-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-70--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4561)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:58.013-0500 I NETWORK [conn193] received client metadata from 127.0.0.1:39940 conn193: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.796-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-325--4104909142373009110 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 9)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.794-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-308--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.691-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-71--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4561)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.610-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-166--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 1078)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.680-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-71--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4561)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:58.024-0500 W CONTROL [conn193] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.797-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-46--4104909142373009110 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 15)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.795-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-317--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.691-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-69--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4561)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.611-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-164--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 1078)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.681-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-69--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 4561)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:58.041-0500 W CONTROL [conn193] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.798-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-47--4104909142373009110 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 15)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.796-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-307--8000595249233899911 (ns: test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.692-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-74--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 5072)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.613-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-173--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2798)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.682-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-74--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 5072)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:58.044-0500 I NETWORK [conn192] end connection 127.0.0.1:39936 (48 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:31:58.045-0500 I NETWORK [conn193] end connection 127.0.0.1:39940 (47 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.797-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-326--8000595249233899911 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 9)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.693-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-75--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 5072)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.614-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-178--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2798)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.683-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-75--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 5072)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.799-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-45--4104909142373009110 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 15)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.799-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-329--8000595249233899911 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 9)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.695-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-73--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 5072)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.617-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-168--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2798)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.684-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-73--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796658, 5072)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.801-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-50--4104909142373009110 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 23)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.800-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-325--8000595249233899911 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 9)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.697-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-80--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796660, 4)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.618-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-175--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2968)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.685-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-80--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796660, 4)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.802-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-51--4104909142373009110 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 23)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.802-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-46--8000595249233899911 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 15)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.698-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-85--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796660, 4)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.619-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-182--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2968)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.687-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-85--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796660, 4)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.805-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-49--4104909142373009110 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 23)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.803-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-47--8000595249233899911 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 15)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.699-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-79--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796660, 4)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.621-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-170--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2968)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.687-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-79--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796660, 4)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.806-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-338--4104909142373009110 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 13)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.805-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-45--8000595249233899911 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 15)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.700-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-82--7234316082034423155 (ns: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266) with drop timestamp Timestamp(1574796660, 1268)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.622-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-174--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3033)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.690-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-82--2310912778499990807 (ns: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266) with drop timestamp Timestamp(1574796660, 1268)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.806-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-339--4104909142373009110 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 13)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.806-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-50--8000595249233899911 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 23)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.702-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-91--7234316082034423155 (ns: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266) with drop timestamp Timestamp(1574796660, 1268)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.622-0500 I INDEX [conn62] Registering index build: d6a9a0c7-228e-4bcc-b0d4-cddaec78687e
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.691-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-91--2310912778499990807 (ns: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266) with drop timestamp Timestamp(1574796660, 1268)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.807-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-337--4104909142373009110 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 13)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.806-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-51--8000595249233899911 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 23)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.703-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-81--7234316082034423155 (ns: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266) with drop timestamp Timestamp(1574796660, 1268)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.622-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-180--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3033)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.691-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-81--2310912778499990807 (ns: test0_fsmdb0.tmp.agg_out.62cd7f07-02a2-4fe1-bd6d-41bf4e1b1266) with drop timestamp Timestamp(1574796660, 1268)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.808-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-342--4104909142373009110 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 21)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.807-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-49--8000595249233899911 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 23)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.704-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-84--7234316082034423155 (ns: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095) with drop timestamp Timestamp(1574796660, 1269)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.629-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-169--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3033)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.692-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-84--2310912778499990807 (ns: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095) with drop timestamp Timestamp(1574796660, 1269)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.810-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-343--4104909142373009110 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 21)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.808-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-338--8000595249233899911 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 13)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.705-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-93--7234316082034423155 (ns: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095) with drop timestamp Timestamp(1574796660, 1269)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.634-0500 I INDEX [conn62] index build: starting on test4_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.693-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-93--2310912778499990807 (ns: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095) with drop timestamp Timestamp(1574796660, 1269)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:54.811-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-341--4104909142373009110 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 21)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.809-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-339--8000595249233899911 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 13)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.708-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-83--7234316082034423155 (ns: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095) with drop timestamp Timestamp(1574796660, 1269)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.634-0500 I INDEX [conn62] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.694-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-83--2310912778499990807 (ns: test0_fsmdb0.tmp.agg_out.47f1dd60-9ca9-44ef-801c-7152b0493095) with drop timestamp Timestamp(1574796660, 1269)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.811-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-337--8000595249233899911 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 13)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:55.657-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.709-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-88--7234316082034423155 (ns: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2) with drop timestamp Timestamp(1574796660, 1524)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.634-0500 I STORAGE [conn62] Index build initialized: d6a9a0c7-228e-4bcc-b0d4-cddaec78687e: test4_fsmdb0.agg_out (779cf8b3-6313-4651-b1cd-c5e10b7b79fc ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.695-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-88--2310912778499990807 (ns: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2) with drop timestamp Timestamp(1574796660, 1524)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.813-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-342--8000595249233899911 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 21)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:55.657-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796715, 15), t: 1 } and commit timestamp Timestamp(1574796715, 15)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.710-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-97--7234316082034423155 (ns: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2) with drop timestamp Timestamp(1574796660, 1524)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.634-0500 I INDEX [conn62] Waiting for index build to complete: d6a9a0c7-228e-4bcc-b0d4-cddaec78687e
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.697-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-97--2310912778499990807 (ns: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2) with drop timestamp Timestamp(1574796660, 1524)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.814-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-343--8000595249233899911 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 21)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:55.657-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.711-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-87--7234316082034423155 (ns: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2) with drop timestamp Timestamp(1574796660, 1524)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.634-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.699-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-87--2310912778499990807 (ns: test0_fsmdb0.tmp.agg_out.93a65d49-e71e-4e2c-8b33-3312c18791c2) with drop timestamp Timestamp(1574796660, 1524)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:54.814-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-341--8000595249233899911 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 21)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:55.657-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635)'. Ident: 'index-346--4104909142373009110', commit timestamp: 'Timestamp(1574796715, 15)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.712-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-96--7234316082034423155 (ns: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37) with drop timestamp Timestamp(1574796660, 2031)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.634-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-176--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3089)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.700-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-96--2310912778499990807 (ns: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37) with drop timestamp Timestamp(1574796660, 2031)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:55.657-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635)'. Ident: 'index-347--4104909142373009110', commit timestamp: 'Timestamp(1574796715, 15)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:55.654-0500 I COMMAND [ReplWriterWorker-13] CMD: drop test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.713-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-103--7234316082034423155 (ns: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37) with drop timestamp Timestamp(1574796660, 2031)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.635-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.701-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-103--2310912778499990807 (ns: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37) with drop timestamp Timestamp(1574796660, 2031)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:55.657-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test3_fsmdb0.fsmcoll0'. Ident: collection-345--4104909142373009110, commit timestamp: Timestamp(1574796715, 15)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:55.654-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796715, 15), t: 1 } and commit timestamp Timestamp(1574796715, 15)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.713-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-95--7234316082034423155 (ns: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37) with drop timestamp Timestamp(1574796660, 2031)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.637-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-184--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3089)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.702-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-95--2310912778499990807 (ns: test0_fsmdb0.tmp.agg_out.6974e3bc-4e7c-45da-812a-7f2281bb8e37) with drop timestamp Timestamp(1574796660, 2031)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:55.672-0500 I COMMAND [ReplWriterWorker-6] CMD: drop config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:55.654-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.714-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-106--7234316082034423155 (ns: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0) with drop timestamp Timestamp(1574796660, 2533)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.639-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.703-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-106--2310912778499990807 (ns: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0) with drop timestamp Timestamp(1574796660, 2533)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:55.672-0500 I STORAGE [ReplWriterWorker-6] dropCollection: config.cache.chunks.test3_fsmdb0.fsmcoll0 (d291b2bc-f179-4f06-8164-0b81d0131eb1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796715, 23), t: 1 } and commit timestamp Timestamp(1574796715, 23)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:55.654-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635)'. Ident: 'index-346--8000595249233899911', commit timestamp: 'Timestamp(1574796715, 15)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.717-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-107--7234316082034423155 (ns: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0) with drop timestamp Timestamp(1574796660, 2533)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.641-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-171--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3089)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.705-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-107--2310912778499990807 (ns: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0) with drop timestamp Timestamp(1574796660, 2533)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:55.672-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for config.cache.chunks.test3_fsmdb0.fsmcoll0 (d291b2bc-f179-4f06-8164-0b81d0131eb1).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:55.654-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test3_fsmdb0.fsmcoll0 (81145456-1c0e-4ef0-89a6-ab06e3485635)'. Ident: 'index-347--8000595249233899911', commit timestamp: 'Timestamp(1574796715, 15)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.718-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-105--7234316082034423155 (ns: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0) with drop timestamp Timestamp(1574796660, 2533)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.642-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d6a9a0c7-228e-4bcc-b0d4-cddaec78687e: test4_fsmdb0.agg_out ( 779cf8b3-6313-4651-b1cd-c5e10b7b79fc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.706-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-105--2310912778499990807 (ns: test0_fsmdb0.tmp.agg_out.a9f3eadd-dde9-4a0c-8b6e-b5fd6aecdab0) with drop timestamp Timestamp(1574796660, 2533)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:55.672-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0 (d291b2bc-f179-4f06-8164-0b81d0131eb1)'. Ident: 'index-350--4104909142373009110', commit timestamp: 'Timestamp(1574796715, 23)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:55.654-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test3_fsmdb0.fsmcoll0'. Ident: collection-345--8000595249233899911, commit timestamp: Timestamp(1574796715, 15)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.719-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-78--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 6)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.642-0500 I INDEX [conn62] Index build completed: d6a9a0c7-228e-4bcc-b0d4-cddaec78687e
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.708-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-78--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 6)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:55.672-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0 (d291b2bc-f179-4f06-8164-0b81d0131eb1)'. Ident: 'index-351--4104909142373009110', commit timestamp: 'Timestamp(1574796715, 23)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:55.670-0500 I COMMAND [ReplWriterWorker-10] CMD: drop config.cache.chunks.test3_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.720-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-89--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 6)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.643-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-177--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796707, 505)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.710-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-89--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 6)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:55.672-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0'. Ident: collection-349--4104909142373009110, commit timestamp: Timestamp(1574796715, 23)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:55.670-0500 I STORAGE [ReplWriterWorker-10] dropCollection: config.cache.chunks.test3_fsmdb0.fsmcoll0 (d291b2bc-f179-4f06-8164-0b81d0131eb1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796715, 23), t: 1 } and commit timestamp Timestamp(1574796715, 23)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.722-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-77--7234316082034423155 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 6)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.644-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-186--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796707, 505)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.711-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-77--2310912778499990807 (ns: test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 6)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:55.682-0500 I COMMAND [ReplWriterWorker-2] dropDatabase test3_fsmdb0 - starting
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:55.670-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for config.cache.chunks.test3_fsmdb0.fsmcoll0 (d291b2bc-f179-4f06-8164-0b81d0131eb1).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.723-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-100--7234316082034423155 (ns: config.cache.chunks.test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 10)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.646-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-172--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796707, 505)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.712-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-100--2310912778499990807 (ns: config.cache.chunks.test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 10)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:55.682-0500 I COMMAND [ReplWriterWorker-2] dropDatabase test3_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:55.670-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0 (d291b2bc-f179-4f06-8164-0b81d0131eb1)'. Ident: 'index-350--8000595249233899911', commit timestamp: 'Timestamp(1574796715, 23)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.724-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-101--7234316082034423155 (ns: config.cache.chunks.test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 10)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.647-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-193--2588534479858262356 (ns: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f) with drop timestamp Timestamp(1574796707, 1024)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.713-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-101--2310912778499990807 (ns: config.cache.chunks.test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 10)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:55.682-0500 I COMMAND [ReplWriterWorker-2] dropDatabase test3_fsmdb0 - finished
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:55.670-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0 (d291b2bc-f179-4f06-8164-0b81d0131eb1)'. Ident: 'index-351--8000595249233899911', commit timestamp: 'Timestamp(1574796715, 23)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.725-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-99--7234316082034423155 (ns: config.cache.chunks.test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 10)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.650-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-194--2588534479858262356 (ns: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f) with drop timestamp Timestamp(1574796707, 1024)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.713-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-99--2310912778499990807 (ns: config.cache.chunks.test0_fsmdb0.agg_out) with drop timestamp Timestamp(1574796667, 10)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:55.692-0500 I SHARDING [ReplWriterWorker-8] setting this node's cached database version for test3_fsmdb0 to {}
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:55.670-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'config.cache.chunks.test3_fsmdb0.fsmcoll0'. Ident: collection-349--8000595249233899911, commit timestamp: Timestamp(1574796715, 23)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.727-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-34--7234316082034423155 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 15)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.651-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-192--2588534479858262356 (ns: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f) with drop timestamp Timestamp(1574796707, 1024)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.714-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-34--2310912778499990807 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 15)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.220-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52816 #87 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:55.680-0500 I COMMAND [ReplWriterWorker-9] dropDatabase test3_fsmdb0 - starting
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.728-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-35--7234316082034423155 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 15)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.652-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-200--2588534479858262356 (ns: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f) with drop timestamp Timestamp(1574796707, 2484)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.715-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-35--2310912778499990807 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 15)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.221-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52818 #88 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:55.680-0500 I COMMAND [ReplWriterWorker-9] dropDatabase test3_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.730-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-33--7234316082034423155 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 15)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.654-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-202--2588534479858262356 (ns: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f) with drop timestamp Timestamp(1574796707, 2484)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.717-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-33--2310912778499990807 (ns: test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 15)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.221-0500 I NETWORK [conn87] received client metadata from 127.0.0.1:52816 conn87: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:55.680-0500 I COMMAND [ReplWriterWorker-9] dropDatabase test3_fsmdb0 - finished
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.731-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-38--7234316082034423155 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 23)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.655-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-196--2588534479858262356 (ns: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f) with drop timestamp Timestamp(1574796707, 2484)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.718-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-38--2310912778499990807 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 23)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.221-0500 I NETWORK [conn88] received client metadata from 127.0.0.1:52818 conn88: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.249-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52866 #89 (13 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.732-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-39--7234316082034423155 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 23)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.657-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-199--2588534479858262356 (ns: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d) with drop timestamp Timestamp(1574796707, 2485)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.720-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-39--2310912778499990807 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 23)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:55.689-0500 I SHARDING [ReplWriterWorker-15] setting this node's cached database version for test3_fsmdb0 to {}
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.249-0500 I NETWORK [conn89] received client metadata from 127.0.0.1:52866 conn89: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.733-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-37--7234316082034423155 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 23)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.658-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-204--2588534479858262356 (ns: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d) with drop timestamp Timestamp(1574796707, 2485)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.721-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-37--2310912778499990807 (ns: config.cache.chunks.test0_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796667, 23)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.220-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53702 #81 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.314-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52896 #90 (14 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.735-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-118--7234316082034423155 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 11)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.660-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-195--2588534479858262356 (ns: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d) with drop timestamp Timestamp(1574796707, 2485)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.722-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-118--2310912778499990807 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 11)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.220-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53710 #82 (13 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.314-0500 I NETWORK [conn90] received client metadata from 127.0.0.1:52896 conn90: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.736-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-119--7234316082034423155 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 11)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.663-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-201--2588534479858262356 (ns: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc) with drop timestamp Timestamp(1574796707, 2539)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.723-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-119--2310912778499990807 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 11)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.221-0500 I NETWORK [conn81] received client metadata from 127.0.0.1:53702 conn81: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.325-0500 W CONTROL [conn90] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 718 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.738-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-117--7234316082034423155 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 11)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.664-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-208--2588534479858262356 (ns: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc) with drop timestamp Timestamp(1574796707, 2539)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.725-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-117--2310912778499990807 (ns: config.cache.chunks.test1_fsmdb0.agg_out) with drop timestamp Timestamp(1574796692, 11)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.221-0500 I NETWORK [conn82] received client metadata from 127.0.0.1:53710 conn82: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.329-0500 W CONTROL [conn90] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 718 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.740-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-110--7234316082034423155 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 16)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.666-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-197--2588534479858262356 (ns: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc) with drop timestamp Timestamp(1574796707, 2539)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.726-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-110--2310912778499990807 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 16)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.249-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53756 #83 (14 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.332-0500 I NETWORK [conn90] end connection 127.0.0.1:52896 (13 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.741-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-111--7234316082034423155 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 16)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.667-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-213--2588534479858262356 (ns: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7) with drop timestamp Timestamp(1574796707, 3056)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.727-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-111--2310912778499990807 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 16)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.250-0500 I NETWORK [conn83] received client metadata from 127.0.0.1:53756 conn83: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.349-0500 I NETWORK [conn88] end connection 127.0.0.1:52818 (12 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.741-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-109--7234316082034423155 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 16)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.668-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-214--2588534479858262356 (ns: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7) with drop timestamp Timestamp(1574796707, 3056)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.730-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-109--2310912778499990807 (ns: test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 16)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.315-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53788 #84 (15 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.362-0500 I STORAGE [ReplWriterWorker-3] createCollection: test4_fsmdb0.fsmcoll0 with provided UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd and options: { uuid: UUID("08555f78-3db2-4ee9-9e10-8c80139ec7dd") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.743-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-114--7234316082034423155 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 25)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.670-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-212--2588534479858262356 (ns: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7) with drop timestamp Timestamp(1574796707, 3056)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.731-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-114--2310912778499990807 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 25)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.315-0500 I NETWORK [conn84] received client metadata from 127.0.0.1:53788 conn84: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.376-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.744-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-115--7234316082034423155 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 25)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.774-0500 I STORAGE [conn88] createCollection: test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 with generated UUID: 48ad36b6-4a15-4dee-b50c-18d3ffa2d97d and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.732-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-115--2310912778499990807 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 25)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.326-0500 W CONTROL [conn84] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 323 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.399-0500 I INDEX [ReplWriterWorker-9] index build: starting on test4_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.745-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-113--7234316082034423155 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 25)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.774-0500 I STORAGE [conn85] createCollection: test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d with generated UUID: c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.733-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-113--2310912778499990807 (ns: config.cache.chunks.test1_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796692, 25)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.330-0500 W CONTROL [conn84] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 323 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.399-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.746-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-130--7234316082034423155 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1469)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.774-0500 I STORAGE [conn77] createCollection: test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 with generated UUID: c467fb4e-6e10-4d08-8479-385968423996 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.734-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-130--2310912778499990807 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1469)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.332-0500 I NETWORK [conn84] end connection 127.0.0.1:53788 (14 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.399-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 2fcfbf98-7c5a-4f08-8fa8-afa43941b8cb: test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.749-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-131--7234316082034423155 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1469)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.776-0500 I STORAGE [conn82] createCollection: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 with generated UUID: 48e9cd12-2254-4a56-81bf-eca579fd6a89 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.735-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-131--2310912778499990807 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1469)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.349-0500 I STORAGE [ReplWriterWorker-2] createCollection: test4_fsmdb0.fsmcoll0 with provided UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd and options: { uuid: UUID("08555f78-3db2-4ee9-9e10-8c80139ec7dd") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.399-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.750-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-129--7234316082034423155 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1469)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.776-0500 I STORAGE [conn84] createCollection: test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 with generated UUID: 5a0b3bde-63ad-432d-a4d5-484683e44103 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.737-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-129--2310912778499990807 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1469)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.349-0500 I NETWORK [conn82] end connection 127.0.0.1:53710 (13 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.399-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.751-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-134--7234316082034423155 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1590)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.814-0500 I INDEX [conn88] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.738-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-134--2310912778499990807 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1590)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.360-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.402-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.752-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-145--7234316082034423155 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1590)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.815-0500 I INDEX [conn88] Registering index build: 41dd9381-5328-4304-b7a7-f38472d1c93e
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.740-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-145--2310912778499990807 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1590)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.379-0500 I INDEX [ReplWriterWorker-7] index build: starting on test4_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.406-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 2fcfbf98-7c5a-4f08-8fa8-afa43941b8cb: test4_fsmdb0.fsmcoll0 ( 08555f78-3db2-4ee9-9e10-8c80139ec7dd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.753-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-133--7234316082034423155 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1590)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.821-0500 I INDEX [conn85] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.741-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-133--2310912778499990807 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 1590)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.379-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.428-0500 I STORAGE [ReplWriterWorker-2] createCollection: config.cache.chunks.test4_fsmdb0.fsmcoll0 with provided UUID: 647e6274-b0dc-4671-90c7-65b5ed709ba8 and options: { uuid: UUID("647e6274-b0dc-4671-90c7-65b5ed709ba8") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.754-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-136--7234316082034423155 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 2222)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.822-0500 I INDEX [conn85] Registering index build: 218b06d4-6ea5-4b9c-b32e-812e6e50a4a4
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.742-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-136--2310912778499990807 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 2222)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.379-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 3fb952dc-c1df-4f80-a251-9986e4338f27: test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.444-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.756-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-143--7234316082034423155 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 2222)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.829-0500 I INDEX [conn77] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.743-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-143--2310912778499990807 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 2222)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.379-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.461-0500 I INDEX [ReplWriterWorker-3] index build: starting on config.cache.chunks.test4_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.757-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-135--7234316082034423155 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 2222)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.829-0500 I INDEX [conn77] Registering index build: 9ac6a0ac-c5d2-4499-96f5-d4cc973b257c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.745-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-135--2310912778499990807 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796694, 2222)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.380-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.461-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.759-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-140--7234316082034423155 (ns: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9) with drop timestamp Timestamp(1574796694, 3097)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.837-0500 I INDEX [conn82] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.746-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-140--2310912778499990807 (ns: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9) with drop timestamp Timestamp(1574796694, 3097)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.383-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.461-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: eacce81b-5e36-4b5b-9f4f-4d2203fb31c6: config.cache.chunks.test4_fsmdb0.fsmcoll0 (647e6274-b0dc-4671-90c7-65b5ed709ba8 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.761-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-149--7234316082034423155 (ns: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9) with drop timestamp Timestamp(1574796694, 3097)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.837-0500 I INDEX [conn82] Registering index build: d727b5e6-44c4-486e-b473-271cf1d66c8b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.747-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-149--2310912778499990807 (ns: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9) with drop timestamp Timestamp(1574796694, 3097)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.384-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 3fb952dc-c1df-4f80-a251-9986e4338f27: test4_fsmdb0.fsmcoll0 ( 08555f78-3db2-4ee9-9e10-8c80139ec7dd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.462-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.762-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-139--7234316082034423155 (ns: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9) with drop timestamp Timestamp(1574796694, 3097)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.844-0500 I INDEX [conn84] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.749-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-139--2310912778499990807 (ns: test2_fsmdb0.tmp.agg_out.8f5b1b50-6825-4b9a-9fd7-96248dac23d9) with drop timestamp Timestamp(1574796694, 3097)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.410-0500 I STORAGE [ReplWriterWorker-9] createCollection: config.cache.chunks.test4_fsmdb0.fsmcoll0 with provided UUID: 647e6274-b0dc-4671-90c7-65b5ed709ba8 and options: { uuid: UUID("647e6274-b0dc-4671-90c7-65b5ed709ba8") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.462-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.763-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-142--7234316082034423155 (ns: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220) with drop timestamp Timestamp(1574796694, 3098)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.845-0500 I INDEX [conn84] Registering index build: 0e978665-a3a6-4c80-9ffa-250e027c4e44
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.751-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-142--2310912778499990807 (ns: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220) with drop timestamp Timestamp(1574796694, 3098)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.427-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.463-0500 I SHARDING [ReplWriterWorker-9] Marking collection config.cache.chunks.test4_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.764-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-151--7234316082034423155 (ns: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220) with drop timestamp Timestamp(1574796694, 3098)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.858-0500 I INDEX [conn88] index build: starting on test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.753-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-151--2310912778499990807 (ns: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220) with drop timestamp Timestamp(1574796694, 3098)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.448-0500 I INDEX [ReplWriterWorker-2] index build: starting on config.cache.chunks.test4_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.465-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: drain applied 1 side writes (inserted: 1, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.765-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-141--7234316082034423155 (ns: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220) with drop timestamp Timestamp(1574796694, 3098)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.858-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.754-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-141--2310912778499990807 (ns: test2_fsmdb0.tmp.agg_out.02e3b25b-1113-4f8c-8c44-4e57667db220) with drop timestamp Timestamp(1574796694, 3098)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.448-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.465-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: drain applied 3 side writes (inserted: 3, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.766-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-154--7234316082034423155 (ns: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946) with drop timestamp Timestamp(1574796695, 5)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.858-0500 I STORAGE [conn88] Index build initialized: 41dd9381-5328-4304-b7a7-f38472d1c93e: test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.755-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-154--2310912778499990807 (ns: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946) with drop timestamp Timestamp(1574796695, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.448-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: a0e78248-9bcf-4d46-bab5-17cb68208c16: config.cache.chunks.test4_fsmdb0.fsmcoll0 (647e6274-b0dc-4671-90c7-65b5ed709ba8 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.465-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.767-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-161--7234316082034423155 (ns: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946) with drop timestamp Timestamp(1574796695, 5)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.858-0500 I INDEX [conn88] Waiting for index build to complete: 41dd9381-5328-4304-b7a7-f38472d1c93e
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.756-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-161--2310912778499990807 (ns: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946) with drop timestamp Timestamp(1574796695, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.449-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:56.466-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: eacce81b-5e36-4b5b-9f4f-4d2203fb31c6: config.cache.chunks.test4_fsmdb0.fsmcoll0 ( 647e6274-b0dc-4671-90c7-65b5ed709ba8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.769-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-153--7234316082034423155 (ns: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946) with drop timestamp Timestamp(1574796695, 5)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.873-0500 I INDEX [conn85] index build: starting on test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.758-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-153--2310912778499990807 (ns: test2_fsmdb0.tmp.agg_out.c09b42f9-38e6-4346-ab24-eae4b59a1946) with drop timestamp Timestamp(1574796695, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.450-0500 I SHARDING [ReplWriterWorker-7] Marking collection config.cache.chunks.test4_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:57.358-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52930 #91 (13 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.771-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-160--7234316082034423155 (ns: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e) with drop timestamp Timestamp(1574796695, 509)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.873-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.759-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-160--2310912778499990807 (ns: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e) with drop timestamp Timestamp(1574796695, 509)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.450-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:57.359-0500 I NETWORK [conn91] received client metadata from 127.0.0.1:52930 conn91: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.772-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-163--7234316082034423155 (ns: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e) with drop timestamp Timestamp(1574796695, 509)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.873-0500 I STORAGE [conn85] Index build initialized: 218b06d4-6ea5-4b9c-b32e-812e6e50a4a4: test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.760-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-163--2310912778499990807 (ns: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e) with drop timestamp Timestamp(1574796695, 509)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.454-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:57.395-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52942 #92 (14 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.774-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-159--7234316082034423155 (ns: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e) with drop timestamp Timestamp(1574796695, 509)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.873-0500 I INDEX [conn85] Waiting for index build to complete: 218b06d4-6ea5-4b9c-b32e-812e6e50a4a4
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.762-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-159--2310912778499990807 (ns: test2_fsmdb0.tmp.agg_out.c0413006-d486-48c8-bcd5-1247ba88e78e) with drop timestamp Timestamp(1574796695, 509)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.454-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:57.396-0500 I NETWORK [conn92] received client metadata from 127.0.0.1:52942 conn92: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.776-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-138--7234316082034423155 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 5)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.874-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.763-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-138--2310912778499990807 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:56.455-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: a0e78248-9bcf-4d46-bab5-17cb68208c16: config.cache.chunks.test4_fsmdb0.fsmcoll0 ( 647e6274-b0dc-4671-90c7-65b5ed709ba8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:57.940-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52954 #93 (15 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.777-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-147--7234316082034423155 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 5)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.874-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.764-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-147--2310912778499990807 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.358-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 2 connections to that host remain open
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:57.941-0500 I NETWORK [conn93] received client metadata from 127.0.0.1:52954 conn93: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.779-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-137--7234316082034423155 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 5)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.874-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.765-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-137--2310912778499990807 (ns: test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.358-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 1 connections to that host remain open
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:58.013-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52976 #94 (16 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.779-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-156--7234316082034423155 (ns: config.cache.chunks.test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 9)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.875-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.766-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-156--2310912778499990807 (ns: config.cache.chunks.test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 9)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:58.013-0500 I NETWORK [conn94] received client metadata from 127.0.0.1:52976 conn94: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.782-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-157--7234316082034423155 (ns: config.cache.chunks.test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 9)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.884-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.768-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-157--2310912778499990807 (ns: config.cache.chunks.test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 9)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:58.024-0500 W CONTROL [conn94] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 718 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.783-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-155--7234316082034423155 (ns: config.cache.chunks.test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 9)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.887-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.769-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-155--2310912778499990807 (ns: config.cache.chunks.test2_fsmdb0.agg_out) with drop timestamp Timestamp(1574796701, 9)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:58.042-0500 W CONTROL [conn94] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 718 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.785-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-122--7234316082034423155 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 14)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.894-0500 I INDEX [conn77] index build: starting on test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.770-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-122--2310912778499990807 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 14)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:31:58.045-0500 I NETWORK [conn94] end connection 127.0.0.1:52976 (15 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.786-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-123--7234316082034423155 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 14)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.894-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.772-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-123--2310912778499990807 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 14)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.787-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-121--7234316082034423155 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 14)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.895-0500 I STORAGE [conn77] Index build initialized: 9ac6a0ac-c5d2-4499-96f5-d4cc973b257c: test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 (c467fb4e-6e10-4d08-8479-385968423996 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.773-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-121--2310912778499990807 (ns: test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 14)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.789-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-126--7234316082034423155 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 23)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.895-0500 I INDEX [conn77] Waiting for index build to complete: 9ac6a0ac-c5d2-4499-96f5-d4cc973b257c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.774-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-126--2310912778499990807 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 23)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.358-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53820 #86 (14 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.790-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-127--7234316082034423155 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 23)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.895-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 41dd9381-5328-4304-b7a7-f38472d1c93e: test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 ( 48ad36b6-4a15-4dee-b50c-18d3ffa2d97d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.776-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-127--2310912778499990807 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 23)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.359-0500 I NETWORK [conn86] received client metadata from 127.0.0.1:53820 conn86: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.792-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-125--7234316082034423155 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 23)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.899-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 218b06d4-6ea5-4b9c-b32e-812e6e50a4a4: test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d ( c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.777-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-125--2310912778499990807 (ns: config.cache.chunks.test2_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796701, 23)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.359-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.794-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-174--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 1078)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.912-0500 I INDEX [conn82] index build: starting on test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.778-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-174--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 1078)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.359-0500 I SHARDING [updateShardIdentityConfigString] Updating config server with confirmed set shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.795-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-175--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 1078)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.912-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.779-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-175--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 1078)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.395-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53832 #92 (15 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.797-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-173--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 1078)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.912-0500 I STORAGE [conn82] Index build initialized: d727b5e6-44c4-486e-b473-271cf1d66c8b: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 (48e9cd12-2254-4a56-81bf-eca579fd6a89 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.780-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-173--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 1078)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.396-0500 I NETWORK [conn92] received client metadata from 127.0.0.1:53832 conn92: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.798-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-178--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2798)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.912-0500 I INDEX [conn82] Waiting for index build to complete: d727b5e6-44c4-486e-b473-271cf1d66c8b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.783-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-178--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2798)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.781-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.799-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-187--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2798)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.912-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.785-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-187--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2798)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.781-0500 I SHARDING [updateShardIdentityConfigString] Updating config server with confirmed set shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.801-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-177--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2798)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.913-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.786-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-177--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2798)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.940-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53844 #93 (16 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.802-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-182--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2968)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.916-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.787-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-182--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2968)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:57.941-0500 I NETWORK [conn93] received client metadata from 127.0.0.1:53844 conn93: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.804-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-189--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2968)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.924-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 9ac6a0ac-c5d2-4499-96f5-d4cc973b257c: test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 ( c467fb4e-6e10-4d08-8479-385968423996 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.789-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-189--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2968)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:58.014-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53866 #94 (17 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.807-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-181--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2968)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.932-0500 I INDEX [conn84] index build: starting on test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.790-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-181--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 2968)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:58.014-0500 I NETWORK [conn94] received client metadata from 127.0.0.1:53866 conn94: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.809-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-180--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3033)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.932-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.792-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-180--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3033)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:58.025-0500 W CONTROL [conn94] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 323 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.810-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-191--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3033)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.932-0500 I STORAGE [conn84] Index build initialized: 0e978665-a3a6-4c80-9ffa-250e027c4e44: test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 (5a0b3bde-63ad-432d-a4d5-484683e44103 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.792-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-191--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3033)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:58.043-0500 W CONTROL [conn94] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 323 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.812-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-179--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3033)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.932-0500 I INDEX [conn85] Index build completed: 218b06d4-6ea5-4b9c-b32e-812e6e50a4a4
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.795-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-179--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3033)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:31:58.045-0500 I NETWORK [conn94] end connection 127.0.0.1:53866 (16 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.813-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-184--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3089)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.932-0500 I INDEX [conn84] Waiting for index build to complete: 0e978665-a3a6-4c80-9ffa-250e027c4e44
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.797-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-184--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3089)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.814-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-193--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3089)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.932-0500 I INDEX [conn88] Index build completed: 41dd9381-5328-4304-b7a7-f38472d1c93e
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.798-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-193--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3089)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.816-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-183--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3089)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.932-0500 I INDEX [conn77] Index build completed: 9ac6a0ac-c5d2-4499-96f5-d4cc973b257c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.799-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-183--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796704, 3089)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.817-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-186--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796707, 505)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.932-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.801-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-186--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796707, 505)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.819-0500 I STORAGE [ReplWriterWorker-4] createCollection: test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 with provided UUID: 48ad36b6-4a15-4dee-b50c-18d3ffa2d97d and options: { uuid: UUID("48ad36b6-4a15-4dee-b50c-18d3ffa2d97d"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.932-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.802-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-195--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796707, 505)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.820-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-195--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796707, 505)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.932-0500 I COMMAND [conn85] command test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796716, 566), signature: { hash: BinData(0, 66C38E54DEAE5E92FDAAA24F5665FFCF0FE11F41), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 110ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.803-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-185--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796707, 505)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.827-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-185--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796707, 505)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.932-0500 I COMMAND [conn77] command test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796716, 566), signature: { hash: BinData(0, 66C38E54DEAE5E92FDAAA24F5665FFCF0FE11F41), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 102ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.804-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-200--2310912778499990807 (ns: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f) with drop timestamp Timestamp(1574796707, 1024)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.835-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.932-0500 I COMMAND [conn88] command test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796716, 566), signature: { hash: BinData(0, 66C38E54DEAE5E92FDAAA24F5665FFCF0FE11F41), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 117ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.807-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-209--2310912778499990807 (ns: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f) with drop timestamp Timestamp(1574796707, 1024)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.836-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-200--7234316082034423155 (ns: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f) with drop timestamp Timestamp(1574796707, 1024)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.933-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.809-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-199--2310912778499990807 (ns: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f) with drop timestamp Timestamp(1574796707, 1024)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.837-0500 I STORAGE [ReplWriterWorker-13] createCollection: test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d with provided UUID: c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2 and options: { uuid: UUID("c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.933-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.810-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-206--2310912778499990807 (ns: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f) with drop timestamp Timestamp(1574796707, 2484)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.838-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-209--7234316082034423155 (ns: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f) with drop timestamp Timestamp(1574796707, 1024)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.936-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.811-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-215--2310912778499990807 (ns: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f) with drop timestamp Timestamp(1574796707, 2484)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.847-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-199--7234316082034423155 (ns: test3_fsmdb0.tmp.agg_out.381c2985-dab8-4509-841f-996b78c5e70f) with drop timestamp Timestamp(1574796707, 1024)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.939-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.812-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-205--2310912778499990807 (ns: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f) with drop timestamp Timestamp(1574796707, 2484)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.853-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.940-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 0e978665-a3a6-4c80-9ffa-250e027c4e44: test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 ( 5a0b3bde-63ad-432d-a4d5-484683e44103 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.814-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-204--2310912778499990807 (ns: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d) with drop timestamp Timestamp(1574796707, 2485)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.854-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-206--7234316082034423155 (ns: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f) with drop timestamp Timestamp(1574796707, 2484)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.940-0500 I INDEX [conn84] Index build completed: 0e978665-a3a6-4c80-9ffa-250e027c4e44
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.816-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-213--2310912778499990807 (ns: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d) with drop timestamp Timestamp(1574796707, 2485)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.855-0500 I STORAGE [ReplWriterWorker-2] createCollection: test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 with provided UUID: c467fb4e-6e10-4d08-8479-385968423996 and options: { uuid: UUID("c467fb4e-6e10-4d08-8479-385968423996"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.942-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d727b5e6-44c4-486e-b473-271cf1d66c8b: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 ( 48e9cd12-2254-4a56-81bf-eca579fd6a89 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.817-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-203--2310912778499990807 (ns: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d) with drop timestamp Timestamp(1574796707, 2485)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.856-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-215--7234316082034423155 (ns: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f) with drop timestamp Timestamp(1574796707, 2484)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.942-0500 I INDEX [conn82] Index build completed: d727b5e6-44c4-486e-b473-271cf1d66c8b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.819-0500 I STORAGE [ReplWriterWorker-13] createCollection: test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 with provided UUID: 48ad36b6-4a15-4dee-b50c-18d3ffa2d97d and options: { uuid: UUID("48ad36b6-4a15-4dee-b50c-18d3ffa2d97d"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.867-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-205--7234316082034423155 (ns: test3_fsmdb0.tmp.agg_out.b78c20cf-c96b-4d54-b200-dd20cb73ef2f) with drop timestamp Timestamp(1574796707, 2484)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.942-0500 I COMMAND [conn82] command test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796716, 566), signature: { hash: BinData(0, 66C38E54DEAE5E92FDAAA24F5665FFCF0FE11F41), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 105ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.820-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-208--2310912778499990807 (ns: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc) with drop timestamp Timestamp(1574796707, 2539)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.874-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.964-0500 I COMMAND [conn77] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.830-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-217--2310912778499990807 (ns: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc) with drop timestamp Timestamp(1574796707, 2539)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.875-0500 I STORAGE [ReplWriterWorker-8] createCollection: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 with provided UUID: 48e9cd12-2254-4a56-81bf-eca579fd6a89 and options: { uuid: UUID("48e9cd12-2254-4a56-81bf-eca579fd6a89"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.964-0500 I STORAGE [conn77] dropCollection: test4_fsmdb0.agg_out (779cf8b3-6313-4651-b1cd-c5e10b7b79fc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796716, 2618), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.838-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.876-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-204--7234316082034423155 (ns: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d) with drop timestamp Timestamp(1574796707, 2485)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.964-0500 I STORAGE [conn77] Finishing collection drop for test4_fsmdb0.agg_out (779cf8b3-6313-4651-b1cd-c5e10b7b79fc).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.839-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-207--2310912778499990807 (ns: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc) with drop timestamp Timestamp(1574796707, 2539)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.886-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-213--7234316082034423155 (ns: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d) with drop timestamp Timestamp(1574796707, 2485)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.964-0500 I STORAGE [conn77] renameCollection: renaming collection 48ad36b6-4a15-4dee-b50c-18d3ffa2d97d from test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.840-0500 I STORAGE [ReplWriterWorker-4] createCollection: test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d with provided UUID: c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2 and options: { uuid: UUID("c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.893-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.964-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (779cf8b3-6313-4651-b1cd-c5e10b7b79fc)'. Ident: 'index-225--2588534479858262356', commit timestamp: 'Timestamp(1574796716, 2618)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.841-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-222--2310912778499990807 (ns: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7) with drop timestamp Timestamp(1574796707, 3056)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.894-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-203--7234316082034423155 (ns: test3_fsmdb0.tmp.agg_out.b9956c37-43c6-4857-8739-a238a711d06d) with drop timestamp Timestamp(1574796707, 2485)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.964-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (779cf8b3-6313-4651-b1cd-c5e10b7b79fc)'. Ident: 'index-226--2588534479858262356', commit timestamp: 'Timestamp(1574796716, 2618)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.849-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-223--2310912778499990807 (ns: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7) with drop timestamp Timestamp(1574796707, 3056)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.895-0500 I STORAGE [ReplWriterWorker-5] createCollection: test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 with provided UUID: 5a0b3bde-63ad-432d-a4d5-484683e44103 and options: { uuid: UUID("5a0b3bde-63ad-432d-a4d5-484683e44103"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.964-0500 I STORAGE [conn77] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-224--2588534479858262356, commit timestamp: Timestamp(1574796716, 2618)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.858-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.897-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-208--7234316082034423155 (ns: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc) with drop timestamp Timestamp(1574796707, 2539)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.965-0500 I COMMAND [conn65] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 174576267326476275, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7376461100232760212, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796716773), clusterTime: Timestamp(1574796716, 561) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796716, 561), signature: { hash: BinData(0, 66C38E54DEAE5E92FDAAA24F5665FFCF0FE11F41), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, 
Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 191ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.859-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-221--2310912778499990807 (ns: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7) with drop timestamp Timestamp(1574796707, 3056)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.906-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-217--7234316082034423155 (ns: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc) with drop timestamp Timestamp(1574796707, 2539)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.967-0500 I COMMAND [conn85] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.860-0500 I STORAGE [ReplWriterWorker-6] createCollection: test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 with provided UUID: c467fb4e-6e10-4d08-8479-385968423996 and options: { uuid: UUID("c467fb4e-6e10-4d08-8479-385968423996"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.913-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.967-0500 I STORAGE [conn85] dropCollection: test4_fsmdb0.agg_out (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796716, 2967), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.877-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.913-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-207--7234316082034423155 (ns: test3_fsmdb0.tmp.agg_out.c4038b0a-8ca8-48e7-9575-4cbf43dd4acc) with drop timestamp Timestamp(1574796707, 2539)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.967-0500 I STORAGE [conn85] Finishing collection drop for test4_fsmdb0.agg_out (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.879-0500 I STORAGE [ReplWriterWorker-3] createCollection: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 with provided UUID: 48e9cd12-2254-4a56-81bf-eca579fd6a89 and options: { uuid: UUID("48e9cd12-2254-4a56-81bf-eca579fd6a89"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.917-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-222--7234316082034423155 (ns: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7) with drop timestamp Timestamp(1574796707, 3056)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.967-0500 I STORAGE [conn85] renameCollection: renaming collection 5a0b3bde-63ad-432d-a4d5-484683e44103 from test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.894-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.926-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-223--7234316082034423155 (ns: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7) with drop timestamp Timestamp(1574796707, 3056)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.967-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d)'. Ident: 'index-233--2588534479858262356', commit timestamp: 'Timestamp(1574796716, 2967)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.895-0500 I STORAGE [ReplWriterWorker-15] createCollection: test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 with provided UUID: 5a0b3bde-63ad-432d-a4d5-484683e44103 and options: { uuid: UUID("5a0b3bde-63ad-432d-a4d5-484683e44103"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.934-0500 I INDEX [ReplWriterWorker-15] index build: starting on test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.967-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d)'. Ident: 'index-238--2588534479858262356', commit timestamp: 'Timestamp(1574796716, 2967)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.910-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.934-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.967-0500 I STORAGE [conn85] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-228--2588534479858262356, commit timestamp: Timestamp(1574796716, 2967)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.931-0500 I INDEX [ReplWriterWorker-12] index build: starting on test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.934-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 94f78683-accf-4b8a-8f77-512e475f9b45: test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.968-0500 I COMMAND [conn80] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2000969426510569146, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5754930058148564337, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796716775), clusterTime: Timestamp(1574796716, 558) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796716, 564), signature: { hash: BinData(0, 66C38E54DEAE5E92FDAAA24F5665FFCF0FE11F41), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 192ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.931-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.934-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.969-0500 I COMMAND [conn88] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.931-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 80fbc7a2-7c4b-44d2-a4ea-e9f34a0ff79b: test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.934-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-221--7234316082034423155 (ns: test3_fsmdb0.tmp.agg_out.1926e9e2-cbe1-4be8-9820-21bc1ea525e7) with drop timestamp Timestamp(1574796707, 3056)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.969-0500 I STORAGE [conn88] dropCollection: test4_fsmdb0.agg_out (5a0b3bde-63ad-432d-a4d5-484683e44103) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796716, 3032), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.931-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.935-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.969-0500 I STORAGE [conn88] Finishing collection drop for test4_fsmdb0.agg_out (5a0b3bde-63ad-432d-a4d5-484683e44103).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.931-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.940-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.969-0500 I STORAGE [conn88] renameCollection: renaming collection c467fb4e-6e10-4d08-8479-385968423996 from test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.934-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.948-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 94f78683-accf-4b8a-8f77-512e475f9b45: test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 ( 48ad36b6-4a15-4dee-b50c-18d3ffa2d97d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.969-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (5a0b3bde-63ad-432d-a4d5-484683e44103)'. Ident: 'index-237--2588534479858262356', commit timestamp: 'Timestamp(1574796716, 3032)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.943-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 80fbc7a2-7c4b-44d2-a4ea-e9f34a0ff79b: test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 ( 48ad36b6-4a15-4dee-b50c-18d3ffa2d97d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.956-0500 I INDEX [ReplWriterWorker-13] index build: starting on test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.969-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (5a0b3bde-63ad-432d-a4d5-484683e44103)'. Ident: 'index-246--2588534479858262356', commit timestamp: 'Timestamp(1574796716, 3032)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.950-0500 I INDEX [ReplWriterWorker-4] index build: starting on test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.956-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.969-0500 I STORAGE [conn88] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-232--2588534479858262356, commit timestamp: Timestamp(1574796716, 3032)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.950-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.956-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 553984df-dfe1-4534-867e-62931d94fafc: test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.969-0500 I COMMAND [conn84] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.950-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 502d60c3-07a7-44b3-8651-b17923d8b643: test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.956-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.969-0500 I COMMAND [conn64] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5140709281891837241, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4552382752585746483, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796716772), clusterTime: Timestamp(1574796716, 561) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796716, 561), signature: { hash: BinData(0, 66C38E54DEAE5E92FDAAA24F5665FFCF0FE11F41), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 195ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.950-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.957-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.969-0500 I STORAGE [conn84] dropCollection: test4_fsmdb0.agg_out (c467fb4e-6e10-4d08-8479-385968423996) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796716, 3033), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.951-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.959-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.969-0500 I STORAGE [conn84] Finishing collection drop for test4_fsmdb0.agg_out (c467fb4e-6e10-4d08-8479-385968423996).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.953-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.968-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 553984df-dfe1-4534-867e-62931d94fafc: test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d ( c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.969-0500 I STORAGE [conn84] renameCollection: renaming collection c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2 from test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.955-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 502d60c3-07a7-44b3-8651-b17923d8b643: test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d ( c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.973-0500 I INDEX [ReplWriterWorker-7] index build: starting on test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.969-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (c467fb4e-6e10-4d08-8479-385968423996)'. Ident: 'index-235--2588534479858262356', commit timestamp: 'Timestamp(1574796716, 3033)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.970-0500 I INDEX [ReplWriterWorker-7] index build: starting on test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.973-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.969-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (c467fb4e-6e10-4d08-8479-385968423996)'. Ident: 'index-242--2588534479858262356', commit timestamp: 'Timestamp(1574796716, 3033)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.970-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.973-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 4fa6b1e0-b0f9-49d7-83e6-0f55e3243387: test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 (c467fb4e-6e10-4d08-8479-385968423996 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.969-0500 I STORAGE [conn84] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-230--2588534479858262356, commit timestamp: Timestamp(1574796716, 3033)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.970-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 6ea1bf3d-df52-4bca-85b9-d20f785db877: test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 (c467fb4e-6e10-4d08-8479-385968423996 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.973-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.970-0500 I COMMAND [conn62] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8116706477089093425, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5521422448327998858, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796716773), clusterTime: Timestamp(1574796716, 561) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796716, 561), signature: { hash: BinData(0, 66C38E54DEAE5E92FDAAA24F5665FFCF0FE11F41), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 196ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.970-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.974-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.970-0500 I STORAGE [conn84] createCollection: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 with generated UUID: e5f250ab-d270-4ffc-975d-3368e7f2ea4a and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.971-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.977-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.971-0500 I STORAGE [conn88] createCollection: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 with generated UUID: 5d97bf05-c8f9-4b41-8a9f-f27badb021e1 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.974-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.985-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4fa6b1e0-b0f9-49d7-83e6-0f55e3243387: test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 ( c467fb4e-6e10-4d08-8479-385968423996 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:56.995-0500 I INDEX [conn84] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.983-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 6ea1bf3d-df52-4bca-85b9-d20f785db877: test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 ( c467fb4e-6e10-4d08-8479-385968423996 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.994-0500 I INDEX [ReplWriterWorker-4] index build: starting on test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.003-0500 I INDEX [conn88] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.990-0500 I INDEX [ReplWriterWorker-13] index build: starting on test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.994-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.004-0500 I STORAGE [conn85] createCollection: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 with generated UUID: 27269583-4a64-45c0-b2a4-50850668c464 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.990-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.994-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 5e4498bb-4544-43a5-94e5-e2906c150b55: test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 (5a0b3bde-63ad-432d-a4d5-484683e44103 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.018-0500 I INDEX [conn85] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.990-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: bead5f6e-e195-4b2b-9c60-7ea5fdff1b2b: test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 (5a0b3bde-63ad-432d-a4d5-484683e44103 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.994-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.018-0500 I INDEX [conn85] Registering index build: 59dc3cca-ff64-4f62-b1a4-1b7832d8cdb2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.990-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.995-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.018-0500 I INDEX [conn84] Registering index build: cf9622dc-db2e-4a04-903a-e88adfe5efcc
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.991-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:56.997-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.019-0500 I INDEX [conn88] Registering index build: c280221a-744b-4c4c-936c-eedb3fdaff21
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:56.994-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.006-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 5e4498bb-4544-43a5-94e5-e2906c150b55: test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 ( 5a0b3bde-63ad-432d-a4d5-484683e44103 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.019-0500 I COMMAND [conn82] CMD: drop test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.001-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: bead5f6e-e195-4b2b-9c60-7ea5fdff1b2b: test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 ( 5a0b3bde-63ad-432d-a4d5-484683e44103 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.013-0500 I INDEX [ReplWriterWorker-2] index build: starting on test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.020-0500 I STORAGE [conn77] createCollection: test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 with generated UUID: c5575831-c760-48cd-9685-6531387fa44c and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.009-0500 I INDEX [ReplWriterWorker-6] index build: starting on test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.013-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.032-0500 I INDEX [conn85] index build: starting on test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.009-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.013-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 3252e979-5f35-49dc-8eb2-bc0f6ab5c8aa: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 (48e9cd12-2254-4a56-81bf-eca579fd6a89 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.032-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.009-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 901055eb-8336-4e4e-ba50-293c86c41845: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 (48e9cd12-2254-4a56-81bf-eca579fd6a89 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.013-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.032-0500 I STORAGE [conn85] Index build initialized: 59dc3cca-ff64-4f62-b1a4-1b7832d8cdb2: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 (27269583-4a64-45c0-b2a4-50850668c464 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.009-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.014-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.032-0500 I INDEX [conn85] Waiting for index build to complete: 59dc3cca-ff64-4f62-b1a4-1b7832d8cdb2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.009-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.016-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.032-0500 I STORAGE [conn82] dropCollection: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 (48e9cd12-2254-4a56-81bf-eca579fd6a89) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.012-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.020-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 3252e979-5f35-49dc-8eb2-bc0f6ab5c8aa: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 ( 48e9cd12-2254-4a56-81bf-eca579fd6a89 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.032-0500 I STORAGE [conn82] Finishing collection drop for test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 (48e9cd12-2254-4a56-81bf-eca579fd6a89).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.015-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 901055eb-8336-4e4e-ba50-293c86c41845: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 ( 48e9cd12-2254-4a56-81bf-eca579fd6a89 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.027-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d) to test4_fsmdb0.agg_out and drop 779cf8b3-6313-4651-b1cd-c5e10b7b79fc.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.032-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 (48e9cd12-2254-4a56-81bf-eca579fd6a89)'. Ident: 'index-236--2588534479858262356', commit timestamp: 'Timestamp(1574796717, 5)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.022-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d) to test4_fsmdb0.agg_out and drop 779cf8b3-6313-4651-b1cd-c5e10b7b79fc.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.027-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test4_fsmdb0.agg_out (779cf8b3-6313-4651-b1cd-c5e10b7b79fc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796716, 2618), t: 1 } and commit timestamp Timestamp(1574796716, 2618)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.032-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 (48e9cd12-2254-4a56-81bf-eca579fd6a89)'. Ident: 'index-244--2588534479858262356', commit timestamp: 'Timestamp(1574796717, 5)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.022-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test4_fsmdb0.agg_out (779cf8b3-6313-4651-b1cd-c5e10b7b79fc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796716, 2618), t: 1 } and commit timestamp Timestamp(1574796716, 2618)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.027-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test4_fsmdb0.agg_out (779cf8b3-6313-4651-b1cd-c5e10b7b79fc).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.032-0500 I STORAGE [conn82] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6'. Ident: collection-231--2588534479858262356, commit timestamp: Timestamp(1574796717, 5)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.022-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test4_fsmdb0.agg_out (779cf8b3-6313-4651-b1cd-c5e10b7b79fc).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.027-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 48ad36b6-4a15-4dee-b50c-18d3ffa2d97d from test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.033-0500 I COMMAND [conn81] command test4_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3622280866374553587, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4941731172173353359, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796716775), clusterTime: Timestamp(1574796716, 558) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796716, 564), signature: { hash: BinData(0, 66C38E54DEAE5E92FDAAA24F5665FFCF0FE11F41), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6\", to: \"test4_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:884 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 257ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.022-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 48ad36b6-4a15-4dee-b50c-18d3ffa2d97d from test4_fsmdb0.tmp.agg_out.a76297f8-a7c5-4751-a7d3-f97ed9015452 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.027-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (779cf8b3-6313-4651-b1cd-c5e10b7b79fc)'. Ident: 'index-234--7234316082034423155', commit timestamp: 'Timestamp(1574796716, 2618)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.027-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (779cf8b3-6313-4651-b1cd-c5e10b7b79fc)'. Ident: 'index-235--7234316082034423155', commit timestamp: 'Timestamp(1574796716, 2618)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.054-0500 I INDEX [conn77] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.027-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-233--7234316082034423155, commit timestamp: Timestamp(1574796716, 2618)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.063-0500 I INDEX [conn84] index build: starting on test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.030-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 (5a0b3bde-63ad-432d-a4d5-484683e44103) to test4_fsmdb0.agg_out and drop 48ad36b6-4a15-4dee-b50c-18d3ffa2d97d.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.358-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47354 #192 (45 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.022-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (779cf8b3-6313-4651-b1cd-c5e10b7b79fc)'. Ident: 'index-234--2310912778499990807', commit timestamp: 'Timestamp(1574796716, 2618)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.030-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test4_fsmdb0.agg_out (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796716, 2967), t: 1 } and commit timestamp Timestamp(1574796716, 2967)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.391-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 3 connections to that host remain open
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.022-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (779cf8b3-6313-4651-b1cd-c5e10b7b79fc)'. Ident: 'index-235--2310912778499990807', commit timestamp: 'Timestamp(1574796716, 2618)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.030-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test4_fsmdb0.agg_out (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.781-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.022-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-233--2310912778499990807, commit timestamp: Timestamp(1574796716, 2618)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.030-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 5a0b3bde-63ad-432d-a4d5-484683e44103 from test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.781-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47366 #193 (46 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.025-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 (5a0b3bde-63ad-432d-a4d5-484683e44103) to test4_fsmdb0.agg_out and drop 48ad36b6-4a15-4dee-b50c-18d3ffa2d97d.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.030-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d)'. Ident: 'index-238--7234316082034423155', commit timestamp: 'Timestamp(1574796716, 2967)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.781-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 2 connections to that host remain open
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.026-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test4_fsmdb0.agg_out (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796716, 2967), t: 1 } and commit timestamp Timestamp(1574796716, 2967)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.030-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d)'. Ident: 'index-247--7234316082034423155', commit timestamp: 'Timestamp(1574796716, 2967)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.781-0500 I COMMAND [conn77] command test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 appName: "tid:0" command: create { create: "tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85", temp: true, validationLevel: "strict", validationAction: "error", databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796717, 2), signature: { hash: BinData(0, B155B59C95953FCA2BC7F7AB81A2463789C1D0B7), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 760ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.026-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test4_fsmdb0.agg_out (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.030-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-237--7234316082034423155, commit timestamp: Timestamp(1574796716, 2967)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.781-0500 I NETWORK [conn192] received client metadata from 127.0.0.1:47354 conn192: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.026-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 5a0b3bde-63ad-432d-a4d5-484683e44103 from test4_fsmdb0.tmp.agg_out.7f6084d5-281e-41a0-a59f-43436c2f3bf9 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.032-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 (c467fb4e-6e10-4d08-8479-385968423996) to test4_fsmdb0.agg_out and drop 5a0b3bde-63ad-432d-a4d5-484683e44103.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.781-0500 I STORAGE [conn84] Index build initialized: cf9622dc-db2e-4a04-903a-e88adfe5efcc: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 (e5f250ab-d270-4ffc-975d-3368e7f2ea4a ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.026-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d)'. Ident: 'index-238--2310912778499990807', commit timestamp: 'Timestamp(1574796716, 2967)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.032-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test4_fsmdb0.agg_out (5a0b3bde-63ad-432d-a4d5-484683e44103) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796716, 3032), t: 1 } and commit timestamp Timestamp(1574796716, 3032)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.781-0500 I INDEX [conn84] Waiting for index build to complete: cf9622dc-db2e-4a04-903a-e88adfe5efcc
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.026-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (48ad36b6-4a15-4dee-b50c-18d3ffa2d97d)'. Ident: 'index-247--2310912778499990807', commit timestamp: 'Timestamp(1574796716, 2967)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.032-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test4_fsmdb0.agg_out (5a0b3bde-63ad-432d-a4d5-484683e44103).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.781-0500 I NETWORK [conn193] received client metadata from 127.0.0.1:47366 conn193: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.026-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-237--2310912778499990807, commit timestamp: Timestamp(1574796716, 2967)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.032-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection c467fb4e-6e10-4d08-8479-385968423996 from test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.781-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.028-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 (c467fb4e-6e10-4d08-8479-385968423996) to test4_fsmdb0.agg_out and drop 5a0b3bde-63ad-432d-a4d5-484683e44103.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.032-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (5a0b3bde-63ad-432d-a4d5-484683e44103)'. Ident: 'index-246--7234316082034423155', commit timestamp: 'Timestamp(1574796716, 3032)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.781-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.028-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test4_fsmdb0.agg_out (5a0b3bde-63ad-432d-a4d5-484683e44103) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796716, 3032), t: 1 } and commit timestamp Timestamp(1574796716, 3032)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.032-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (5a0b3bde-63ad-432d-a4d5-484683e44103)'. Ident: 'index-253--7234316082034423155', commit timestamp: 'Timestamp(1574796716, 3032)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.781-0500 I INDEX [conn77] Registering index build: ff3d978b-fbe9-400a-b9c2-cf274f1629c7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.028-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test4_fsmdb0.agg_out (5a0b3bde-63ad-432d-a4d5-484683e44103).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.033-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-245--7234316082034423155, commit timestamp: Timestamp(1574796716, 3032)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.782-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.028-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection c467fb4e-6e10-4d08-8479-385968423996 from test4_fsmdb0.tmp.agg_out.f14ba512-581f-47b8-9e8e-e082530b4e14 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.033-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2) to test4_fsmdb0.agg_out and drop c467fb4e-6e10-4d08-8479-385968423996.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.782-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.028-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (5a0b3bde-63ad-432d-a4d5-484683e44103)'. Ident: 'index-246--2310912778499990807', commit timestamp: 'Timestamp(1574796716, 3032)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.033-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test4_fsmdb0.agg_out (c467fb4e-6e10-4d08-8479-385968423996) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796716, 3033), t: 1 } and commit timestamp Timestamp(1574796716, 3033)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.784-0500 I STORAGE [conn82] createCollection: test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 with generated UUID: 6b97609e-9e18-4a96-9edd-cc4e6b6742fe and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.028-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (5a0b3bde-63ad-432d-a4d5-484683e44103)'. Ident: 'index-253--2310912778499990807', commit timestamp: 'Timestamp(1574796716, 3032)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.033-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test4_fsmdb0.agg_out (c467fb4e-6e10-4d08-8479-385968423996).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.792-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.028-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-245--2310912778499990807, commit timestamp: Timestamp(1574796716, 3032)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.033-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2 from test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.795-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.029-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2) to test4_fsmdb0.agg_out and drop c467fb4e-6e10-4d08-8479-385968423996.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.033-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (c467fb4e-6e10-4d08-8479-385968423996)'. Ident: 'index-242--7234316082034423155', commit timestamp: 'Timestamp(1574796716, 3033)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.807-0500 I INDEX [conn88] index build: starting on test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.029-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test4_fsmdb0.agg_out (c467fb4e-6e10-4d08-8479-385968423996) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796716, 3033), t: 1 } and commit timestamp Timestamp(1574796716, 3033)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.033-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (c467fb4e-6e10-4d08-8479-385968423996)'. Ident: 'index-251--7234316082034423155', commit timestamp: 'Timestamp(1574796716, 3033)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.807-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.029-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test4_fsmdb0.agg_out (c467fb4e-6e10-4d08-8479-385968423996).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.033-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-241--7234316082034423155, commit timestamp: Timestamp(1574796716, 3033)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.807-0500 I STORAGE [conn88] Index build initialized: c280221a-744b-4c4c-936c-eedb3fdaff21: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 (5d97bf05-c8f9-4b41-8a9f-f27badb021e1 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.029-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2 from test4_fsmdb0.tmp.agg_out.60823f8c-a237-4962-b137-351174d2369d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.035-0500 I STORAGE [ReplWriterWorker-11] createCollection: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 with provided UUID: e5f250ab-d270-4ffc-975d-3368e7f2ea4a and options: { uuid: UUID("e5f250ab-d270-4ffc-975d-3368e7f2ea4a"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.807-0500 I INDEX [conn88] Waiting for index build to complete: c280221a-744b-4c4c-936c-eedb3fdaff21
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.029-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (c467fb4e-6e10-4d08-8479-385968423996)'. Ident: 'index-242--2310912778499990807', commit timestamp: 'Timestamp(1574796716, 3033)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.051-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.808-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.029-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (c467fb4e-6e10-4d08-8479-385968423996)'. Ident: 'index-251--2310912778499990807', commit timestamp: 'Timestamp(1574796716, 3033)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.052-0500 I STORAGE [ReplWriterWorker-12] createCollection: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 with provided UUID: 5d97bf05-c8f9-4b41-8a9f-f27badb021e1 and options: { uuid: UUID("5d97bf05-c8f9-4b41-8a9f-f27badb021e1"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.808-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 59dc3cca-ff64-4f62-b1a4-1b7832d8cdb2: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 ( 27269583-4a64-45c0-b2a4-50850668c464 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.029-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-241--2310912778499990807, commit timestamp: Timestamp(1574796716, 3033)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.069-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.808-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: cf9622dc-db2e-4a04-903a-e88adfe5efcc: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 ( e5f250ab-d270-4ffc-975d-3368e7f2ea4a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.030-0500 I STORAGE [ReplWriterWorker-10] createCollection: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 with provided UUID: e5f250ab-d270-4ffc-975d-3368e7f2ea4a and options: { uuid: UUID("e5f250ab-d270-4ffc-975d-3368e7f2ea4a"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.071-0500 I STORAGE [ReplWriterWorker-3] createCollection: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 with provided UUID: 27269583-4a64-45c0-b2a4-50850668c464 and options: { uuid: UUID("27269583-4a64-45c0-b2a4-50850668c464"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.813-0500 I INDEX [conn82] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.045-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.083-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.813-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.046-0500 I STORAGE [ReplWriterWorker-3] createCollection: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 with provided UUID: 5d97bf05-c8f9-4b41-8a9f-f27badb021e1 and options: { uuid: UUID("5d97bf05-c8f9-4b41-8a9f-f27badb021e1"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.358-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35916 #81 (12 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.814-0500 I INDEX [conn82] Registering index build: fc75054d-6d36-4934-b2e2-81f27dab4e2a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.061-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.359-0500 I NETWORK [conn81] received client metadata from 127.0.0.1:35916 conn81: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.816-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.063-0500 I STORAGE [ReplWriterWorker-12] createCollection: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 with provided UUID: 27269583-4a64-45c0-b2a4-50850668c464 and options: { uuid: UUID("27269583-4a64-45c0-b2a4-50850668c464"), temp: true }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.825-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: c280221a-744b-4c4c-936c-eedb3fdaff21: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 ( 5d97bf05-c8f9-4b41-8a9f-f27badb021e1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.077-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.833-0500 I INDEX [conn77] index build: starting on test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.358-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52554 #87 (12 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.833-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.359-0500 I NETWORK [conn87] received client metadata from 127.0.0.1:52554 conn87: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.833-0500 I STORAGE [conn77] Index build initialized: ff3d978b-fbe9-400a-b9c2-cf274f1629c7: test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 (c5575831-c760-48cd-9685-6531387fa44c ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.392-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 2 connections to that host remain open
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.833-0500 I INDEX [conn77] Waiting for index build to complete: ff3d978b-fbe9-400a-b9c2-cf274f1629c7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.392-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 1 connections to that host remain open
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.833-0500 I INDEX [conn85] Index build completed: 59dc3cca-ff64-4f62-b1a4-1b7832d8cdb2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.395-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52568 #88 (13 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.395-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35926 #82 (13 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.833-0500 I INDEX [conn84] Index build completed: cf9622dc-db2e-4a04-903a-e88adfe5efcc
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.395-0500 I NETWORK [conn88] received client metadata from 127.0.0.1:52568 conn88: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.395-0500 I NETWORK [conn82] received client metadata from 127.0.0.1:35926 conn82: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.833-0500 I COMMAND [conn85] command test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796717, 2), signature: { hash: BinData(0, B155B59C95953FCA2BC7F7AB81A2463789C1D0B7), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 814ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.396-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.783-0500 I STORAGE [ReplWriterWorker-1] createCollection: test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 with provided UUID: c5575831-c760-48cd-9685-6531387fa44c and options: { uuid: UUID("c5575831-c760-48cd-9685-6531387fa44c"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.833-0500 I COMMAND [conn84] command test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796716, 3087), signature: { hash: BinData(0, 66C38E54DEAE5E92FDAAA24F5665FFCF0FE11F41), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 23094 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 838ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.396-0500 I SHARDING [Sharding-Fixed-1] Updating config server with confirmed set shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.798-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.848-0500 I INDEX [conn82] index build: starting on test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.800-0500 I COMMAND [ReplWriterWorker-2] CMD: drop test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.781-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.848-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.800-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 (48e9cd12-2254-4a56-81bf-eca579fd6a89) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796717, 5), t: 1 } and commit timestamp Timestamp(1574796717, 5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.781-0500 I SHARDING [Sharding-Fixed-1] Updating config server with confirmed set shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.848-0500 I STORAGE [conn82] Index build initialized: fc75054d-6d36-4934-b2e2-81f27dab4e2a: test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 (6b97609e-9e18-4a96-9edd-cc4e6b6742fe ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.800-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 (48e9cd12-2254-4a56-81bf-eca579fd6a89).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.783-0500 I STORAGE [ReplWriterWorker-14] createCollection: test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 with provided UUID: c5575831-c760-48cd-9685-6531387fa44c and options: { uuid: UUID("c5575831-c760-48cd-9685-6531387fa44c"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.848-0500 I INDEX [conn82] Waiting for index build to complete: fc75054d-6d36-4934-b2e2-81f27dab4e2a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.800-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 (48e9cd12-2254-4a56-81bf-eca579fd6a89)'. Ident: 'index-244--2310912778499990807', commit timestamp: 'Timestamp(1574796717, 5)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.798-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.849-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.800-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 (48e9cd12-2254-4a56-81bf-eca579fd6a89)'. Ident: 'index-255--2310912778499990807', commit timestamp: 'Timestamp(1574796717, 5)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.799-0500 I COMMAND [ReplWriterWorker-8] CMD: drop test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.849-0500 I INDEX [conn88] Index build completed: c280221a-744b-4c4c-936c-eedb3fdaff21
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.800-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6'. Ident: collection-243--2310912778499990807, commit timestamp: Timestamp(1574796717, 5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.799-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 (48e9cd12-2254-4a56-81bf-eca579fd6a89) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796717, 5), t: 1 } and commit timestamp Timestamp(1574796717, 5)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.849-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.816-0500 I STORAGE [ReplWriterWorker-10] createCollection: test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 with provided UUID: 6b97609e-9e18-4a96-9edd-cc4e6b6742fe and options: { uuid: UUID("6b97609e-9e18-4a96-9edd-cc4e6b6742fe"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.800-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 (48e9cd12-2254-4a56-81bf-eca579fd6a89).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.849-0500 I COMMAND [conn88] command test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796717, 1), signature: { hash: BinData(0, B155B59C95953FCA2BC7F7AB81A2463789C1D0B7), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 14391 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 844ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.831-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.800-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 (48e9cd12-2254-4a56-81bf-eca579fd6a89)'. Ident: 'index-244--7234316082034423155', commit timestamp: 'Timestamp(1574796717, 5)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.850-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.847-0500 I INDEX [ReplWriterWorker-3] index build: starting on test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.800-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6 (48e9cd12-2254-4a56-81bf-eca579fd6a89)'. Ident: 'index-255--7234316082034423155', commit timestamp: 'Timestamp(1574796717, 5)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.851-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.847-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.800-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6'. Ident: collection-243--7234316082034423155, commit timestamp: Timestamp(1574796717, 5)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.853-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.847-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 5ac9547b-315c-4deb-85b3-16c921ac9d10: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 (27269583-4a64-45c0-b2a4-50850668c464 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.816-0500 I STORAGE [ReplWriterWorker-11] createCollection: test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 with provided UUID: 6b97609e-9e18-4a96-9edd-cc4e6b6742fe and options: { uuid: UUID("6b97609e-9e18-4a96-9edd-cc4e6b6742fe"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.856-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.847-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.831-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.858-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: fc75054d-6d36-4934-b2e2-81f27dab4e2a: test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 ( 6b97609e-9e18-4a96-9edd-cc4e6b6742fe ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.848-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.846-0500 I INDEX [ReplWriterWorker-12] index build: starting on test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.858-0500 I INDEX [conn82] Index build completed: fc75054d-6d36-4934-b2e2-81f27dab4e2a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.850-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.846-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.861-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: ff3d978b-fbe9-400a-b9c2-cf274f1629c7: test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 ( c5575831-c760-48cd-9685-6531387fa44c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.858-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 5ac9547b-315c-4deb-85b3-16c921ac9d10: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 ( 27269583-4a64-45c0-b2a4-50850668c464 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.846-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: f8f745b8-8a7c-4ebc-a0b4-7e46c3efcb62: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 (27269583-4a64-45c0-b2a4-50850668c464 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.873-0500 I INDEX [conn77] Index build completed: ff3d978b-fbe9-400a-b9c2-cf274f1629c7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.867-0500 I INDEX [ReplWriterWorker-12] index build: starting on test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.847-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I COMMAND [conn77] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.867-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.847-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I COMMAND [conn85] CMD: drop test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.867-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 8c76162f-fff4-46bc-a88e-fef0bdf98951: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 (e5f250ab-d270-4ffc-975d-3368e7f2ea4a ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.850-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I COMMAND [conn88] CMD: drop test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.867-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.860-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: f8f745b8-8a7c-4ebc-a0b4-7e46c3efcb62: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 ( 27269583-4a64-45c0-b2a4-50850668c464 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn77] dropCollection: test4_fsmdb0.agg_out (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796717, 2151), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.868-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.868-0500 I INDEX [ReplWriterWorker-3] index build: starting on test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn77] Finishing collection drop for test4_fsmdb0.agg_out (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.871-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.868-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn77] renameCollection: renaming collection c5575831-c760-48cd-9685-6531387fa44c from test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.880-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 8c76162f-fff4-46bc-a88e-fef0bdf98951: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 ( e5f250ab-d270-4ffc-975d-3368e7f2ea4a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.868-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: d09d6864-309e-4816-8524-490d4d817d1c: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 (e5f250ab-d270-4ffc-975d-3368e7f2ea4a ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2)'. Ident: 'index-234--2588534479858262356', commit timestamp: 'Timestamp(1574796717, 2151)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.888-0500 I INDEX [ReplWriterWorker-2] index build: starting on test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.868-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2)'. Ident: 'index-240--2588534479858262356', commit timestamp: 'Timestamp(1574796717, 2151)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.888-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.869-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I COMMAND [conn84] CMD: drop test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.888-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: e5c9c5b0-1ff2-44e3-b637-913b34a2f75e: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 (5d97bf05-c8f9-4b41-8a9f-f27badb021e1 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.873-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn77] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-229--2588534479858262356, commit timestamp: Timestamp(1574796717, 2151)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.888-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.881-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d09d6864-309e-4816-8524-490d4d817d1c: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 ( e5f250ab-d270-4ffc-975d-3368e7f2ea4a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn85] dropCollection: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 (5d97bf05-c8f9-4b41-8a9f-f27badb021e1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.888-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.887-0500 I INDEX [ReplWriterWorker-8] index build: starting on test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn84] dropCollection: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 (e5f250ab-d270-4ffc-975d-3368e7f2ea4a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.890-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.888-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn88] dropCollection: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 (27269583-4a64-45c0-b2a4-50850668c464) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.895-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: e5c9c5b0-1ff2-44e3-b637-913b34a2f75e: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 ( 5d97bf05-c8f9-4b41-8a9f-f27badb021e1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.915-0500 I INDEX [ReplWriterWorker-12] index build: starting on test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn85] Finishing collection drop for test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 (5d97bf05-c8f9-4b41-8a9f-f27badb021e1).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.888-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 3afc6a54-49ea-4f23-959d-5775a4cd9e62: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 (5d97bf05-c8f9-4b41-8a9f-f27badb021e1 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.915-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn84] Finishing collection drop for test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 (e5f250ab-d270-4ffc-975d-3368e7f2ea4a).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.888-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.915-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: d8c720fa-344e-4114-ab74-ab18a09328e9: test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 (6b97609e-9e18-4a96-9edd-cc4e6b6742fe ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn88] Finishing collection drop for test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 (27269583-4a64-45c0-b2a4-50850668c464).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.888-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.915-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 (5d97bf05-c8f9-4b41-8a9f-f27badb021e1)'. Ident: 'index-251--2588534479858262356', commit timestamp: 'Timestamp(1574796717, 2216)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 (5d97bf05-c8f9-4b41-8a9f-f27badb021e1)'. Ident: 'index-260--2588534479858262356', commit timestamp: 'Timestamp(1574796717, 2216)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.916-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.890-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 (e5f250ab-d270-4ffc-975d-3368e7f2ea4a)'. Ident: 'index-250--2588534479858262356', commit timestamp: 'Timestamp(1574796717, 2217)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.919-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.893-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3afc6a54-49ea-4f23-959d-5775a4cd9e62: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 ( 5d97bf05-c8f9-4b41-8a9f-f27badb021e1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:02.773-0500 I COMMAND [conn170] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6") }, $clusterTime: { clusterTime: Timestamp(1574796717, 2346), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 4877ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn85] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717'. Ident: collection-249--2588534479858262356, commit timestamp: Timestamp(1574796717, 2216)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.927-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: d8c720fa-344e-4114-ab74-ab18a09328e9: test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 ( 6b97609e-9e18-4a96-9edd-cc4e6b6742fe ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.914-0500 I INDEX [ReplWriterWorker-14] index build: starting on test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 (27269583-4a64-45c0-b2a4-50850668c464)'. Ident: 'index-253--2588534479858262356', commit timestamp: 'Timestamp(1574796717, 2218)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.936-0500 I INDEX [ReplWriterWorker-10] index build: starting on test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.914-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I COMMAND [conn64] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1483278187485800979, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4111797544957386427, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796717005), clusterTime: Timestamp(1574796717, 1) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796717, 2), signature: { hash: BinData(0, B155B59C95953FCA2BC7F7AB81A2463789C1D0B7), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 873ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.936-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.914-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: c64cd639-a89d-4321-b875-9558cd636f4d: test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 (6b97609e-9e18-4a96-9edd-cc4e6b6742fe ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 (e5f250ab-d270-4ffc-975d-3368e7f2ea4a)'. Ident: 'index-257--2588534479858262356', commit timestamp: 'Timestamp(1574796717, 2217)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.936-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 5182cf68-f486-4b44-bd37-119e9bf17d5f: test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 (c5575831-c760-48cd-9685-6531387fa44c ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.914-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 (27269583-4a64-45c0-b2a4-50850668c464)'. Ident: 'index-254--2588534479858262356', commit timestamp: 'Timestamp(1574796717, 2218)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.936-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.915-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn84] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05'. Ident: collection-248--2588534479858262356, commit timestamp: Timestamp(1574796717, 2217)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.937-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.917-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.892-0500 I STORAGE [conn88] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996'. Ident: collection-252--2588534479858262356, commit timestamp: Timestamp(1574796717, 2218)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.946-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 (c5575831-c760-48cd-9685-6531387fa44c) to test4_fsmdb0.agg_out and drop c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.927-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c64cd639-a89d-4321-b875-9558cd636f4d: test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 ( 6b97609e-9e18-4a96-9edd-cc4e6b6742fe ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.893-0500 I COMMAND [conn80] command test4_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1696590300035901049, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 9184055738907638512, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796716970), clusterTime: Timestamp(1574796716, 3031) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796716, 3086), signature: { hash: BinData(0, 66C38E54DEAE5E92FDAAA24F5665FFCF0FE11F41), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717\", to: \"test4_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:884 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 921ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.946-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52592 #89 (14 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.935-0500 I INDEX [ReplWriterWorker-9] index build: starting on test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.893-0500 I COMMAND [conn65] command test4_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6813285554240574457, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3897566804727573477, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796716967), clusterTime: Timestamp(1574796716, 2850) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796716, 3032), signature: { hash: BinData(0, 66C38E54DEAE5E92FDAAA24F5665FFCF0FE11F41), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05\", to: \"test4_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:884 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 57 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 923ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.947-0500 I NETWORK [conn89] received client metadata from 127.0.0.1:52592 conn89: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.935-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.893-0500 I COMMAND [conn62] command test4_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6915952617209050846, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2310412560142608205, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796716971), clusterTime: Timestamp(1574796716, 3085) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796716, 3087), signature: { hash: BinData(0, 66C38E54DEAE5E92FDAAA24F5665FFCF0FE11F41), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796716, 558), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996\", to: \"test4_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:884 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 920ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.949-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: drain applied 500 side writes (inserted: 500, deleted: 0) for '_id_hashed' in 8 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.935-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: dafca6ad-25b2-4498-a98a-c40828a11374: test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 (c5575831-c760-48cd-9685-6531387fa44c ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.894-0500 I COMMAND [conn80] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.949-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.935-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.894-0500 I COMMAND [conn64] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.951-0500 I INDEX [ReplWriterWorker-1] Waiting until the following index builds are finished:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.936-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.895-0500 I STORAGE [conn88] createCollection: test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b with generated UUID: 2b6b795a-0c83-4765-9ebd-8e106f88b4e7 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.951-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 5182cf68-f486-4b44-bd37-119e9bf17d5f: test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 ( c5575831-c760-48cd-9685-6531387fa44c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.938-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.895-0500 I STORAGE [conn84] createCollection: test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 with generated UUID: 11d8f567-4661-4e25-8595-4ce121f2030d and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.951-0500 I INDEX [ReplWriterWorker-1] Index build with UUID: 5182cf68-f486-4b44-bd37-119e9bf17d5f
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.942-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: dafca6ad-25b2-4498-a98a-c40828a11374: test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 ( c5575831-c760-48cd-9685-6531387fa44c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.897-0500 I STORAGE [conn85] createCollection: test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 with generated UUID: bf494165-965e-497d-968d-60b56d9d4c4c and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.951-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 (c5575831-c760-48cd-9685-6531387fa44c) to test4_fsmdb0.agg_out and drop c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.946-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35950 #89 (14 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.897-0500 I STORAGE [conn77] createCollection: test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d with generated UUID: a452286d-4741-4bd2-84c4-65f270281c9d and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.951-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test4_fsmdb0.agg_out (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796717, 2151), t: 1 } and commit timestamp Timestamp(1574796717, 2151)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.946-0500 I NETWORK [conn89] received client metadata from 127.0.0.1:35950 conn89: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.933-0500 I INDEX [conn88] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.951-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test4_fsmdb0.agg_out (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.949-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 (c5575831-c760-48cd-9685-6531387fa44c) to test4_fsmdb0.agg_out and drop c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.939-0500 I INDEX [conn84] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.951-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection c5575831-c760-48cd-9685-6531387fa44c from test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.949-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test4_fsmdb0.agg_out (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796717, 2151), t: 1 } and commit timestamp Timestamp(1574796717, 2151)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.946-0500 I INDEX [conn85] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.951-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2)'. Ident: 'index-240--2310912778499990807', commit timestamp: 'Timestamp(1574796717, 2151)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.949-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test4_fsmdb0.agg_out (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.946-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47396 #194 (47 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.951-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2)'. Ident: 'index-249--2310912778499990807', commit timestamp: 'Timestamp(1574796717, 2151)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.949-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection c5575831-c760-48cd-9685-6531387fa44c from test4_fsmdb0.tmp.agg_out.2aae924e-1d49-402b-821c-d7262088ac85 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.947-0500 I NETWORK [conn194] received client metadata from 127.0.0.1:47396 conn194: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.951-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-239--2310912778499990807, commit timestamp: Timestamp(1574796717, 2151)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.949-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2)'. Ident: 'index-240--7234316082034423155', commit timestamp: 'Timestamp(1574796717, 2151)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.947-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47398 #195 (48 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.954-0500 I COMMAND [ReplWriterWorker-8] CMD: drop test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.949-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (c82a55f4-14f2-481e-9ac0-ae6d0da2fcb2)'. Ident: 'index-249--7234316082034423155', commit timestamp: 'Timestamp(1574796717, 2151)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.947-0500 I NETWORK [conn195] received client metadata from 127.0.0.1:47398 conn195: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.954-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 (5d97bf05-c8f9-4b41-8a9f-f27badb021e1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796717, 2216), t: 1 } and commit timestamp Timestamp(1574796717, 2216)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.949-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-239--7234316082034423155, commit timestamp: Timestamp(1574796717, 2151)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.952-0500 I INDEX [conn77] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.954-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 (5d97bf05-c8f9-4b41-8a9f-f27badb021e1).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.951-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.952-0500 I COMMAND [conn82] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.954-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 (5d97bf05-c8f9-4b41-8a9f-f27badb021e1)'. Ident: 'index-260--2310912778499990807', commit timestamp: 'Timestamp(1574796717, 2216)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.952-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 (5d97bf05-c8f9-4b41-8a9f-f27badb021e1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796717, 2216), t: 1 } and commit timestamp Timestamp(1574796717, 2216)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.952-0500 I STORAGE [conn82] dropCollection: test4_fsmdb0.agg_out (c5575831-c760-48cd-9685-6531387fa44c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796717, 2531), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.954-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 (5d97bf05-c8f9-4b41-8a9f-f27badb021e1)'. Ident: 'index-271--2310912778499990807', commit timestamp: 'Timestamp(1574796717, 2216)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.952-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 (5d97bf05-c8f9-4b41-8a9f-f27badb021e1).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.952-0500 I STORAGE [conn82] Finishing collection drop for test4_fsmdb0.agg_out (c5575831-c760-48cd-9685-6531387fa44c).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.954-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717'. Ident: collection-259--2310912778499990807, commit timestamp: Timestamp(1574796717, 2216)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.952-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 (5d97bf05-c8f9-4b41-8a9f-f27badb021e1)'. Ident: 'index-260--7234316082034423155', commit timestamp: 'Timestamp(1574796717, 2216)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.952-0500 I STORAGE [conn82] renameCollection: renaming collection 6b97609e-9e18-4a96-9edd-cc4e6b6742fe from test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.955-0500 I COMMAND [ReplWriterWorker-0] CMD: drop test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.952-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717 (5d97bf05-c8f9-4b41-8a9f-f27badb021e1)'. Ident: 'index-271--7234316082034423155', commit timestamp: 'Timestamp(1574796717, 2216)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.953-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (c5575831-c760-48cd-9685-6531387fa44c)'. Ident: 'index-258--2588534479858262356', commit timestamp: 'Timestamp(1574796717, 2531)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.955-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 (e5f250ab-d270-4ffc-975d-3368e7f2ea4a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796717, 2217), t: 1 } and commit timestamp Timestamp(1574796717, 2217)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.952-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717'. Ident: collection-259--7234316082034423155, commit timestamp: Timestamp(1574796717, 2216)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.953-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (c5575831-c760-48cd-9685-6531387fa44c)'. Ident: 'index-264--2588534479858262356', commit timestamp: 'Timestamp(1574796717, 2531)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.955-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 (e5f250ab-d270-4ffc-975d-3368e7f2ea4a).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.952-0500 I COMMAND [ReplWriterWorker-0] CMD: drop test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.953-0500 I STORAGE [conn82] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-255--2588534479858262356, commit timestamp: Timestamp(1574796717, 2531)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.955-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 (e5f250ab-d270-4ffc-975d-3368e7f2ea4a)'. Ident: 'index-258--2310912778499990807', commit timestamp: 'Timestamp(1574796717, 2217)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.952-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 (e5f250ab-d270-4ffc-975d-3368e7f2ea4a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796717, 2217), t: 1 } and commit timestamp Timestamp(1574796717, 2217)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.953-0500 I INDEX [conn88] Registering index build: f5c57de3-e433-4635-95f2-a5f775e6a00b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.955-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 (e5f250ab-d270-4ffc-975d-3368e7f2ea4a)'. Ident: 'index-269--2310912778499990807', commit timestamp: 'Timestamp(1574796717, 2217)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.952-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 (e5f250ab-d270-4ffc-975d-3368e7f2ea4a).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.953-0500 I INDEX [conn84] Registering index build: 3335d930-572a-4bfe-936e-dddd4ac058bd
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.955-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05'. Ident: collection-257--2310912778499990807, commit timestamp: Timestamp(1574796717, 2217)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.952-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 (e5f250ab-d270-4ffc-975d-3368e7f2ea4a)'. Ident: 'index-258--7234316082034423155', commit timestamp: 'Timestamp(1574796717, 2217)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.953-0500 I INDEX [conn77] Registering index build: 562d2f03-ca96-493a-a291-457e7fb31817
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.956-0500 I COMMAND [ReplWriterWorker-10] CMD: drop test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.952-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05 (e5f250ab-d270-4ffc-975d-3368e7f2ea4a)'. Ident: 'index-269--7234316082034423155', commit timestamp: 'Timestamp(1574796717, 2217)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.953-0500 I INDEX [conn85] Registering index build: 1aab3df1-081f-47a4-a411-292dd0c53d35
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.956-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 (27269583-4a64-45c0-b2a4-50850668c464) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796717, 2218), t: 1 } and commit timestamp Timestamp(1574796717, 2218)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.953-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05'. Ident: collection-257--7234316082034423155, commit timestamp: Timestamp(1574796717, 2217)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.953-0500 I COMMAND [conn81] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8004691305260695506, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5162722493319157063, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796717783), clusterTime: Timestamp(1574796717, 5) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796717, 7), signature: { hash: BinData(0, B155B59C95953FCA2BC7F7AB81A2463789C1D0B7), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 7), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 169ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.956-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 (27269583-4a64-45c0-b2a4-50850668c464).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.953-0500 I COMMAND [ReplWriterWorker-5] CMD: drop test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.957-0500 I STORAGE [conn82] createCollection: test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d with generated UUID: 0235f091-e4f7-4096-8802-85a62492f0f9 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.956-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 (27269583-4a64-45c0-b2a4-50850668c464)'. Ident: 'index-262--2310912778499990807', commit timestamp: 'Timestamp(1574796717, 2218)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.953-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 (27269583-4a64-45c0-b2a4-50850668c464) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796717, 2218), t: 1 } and commit timestamp Timestamp(1574796717, 2218)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.975-0500 I INDEX [conn88] index build: starting on test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.956-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 (27269583-4a64-45c0-b2a4-50850668c464)'. Ident: 'index-267--2310912778499990807', commit timestamp: 'Timestamp(1574796717, 2218)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.953-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 (27269583-4a64-45c0-b2a4-50850668c464).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.975-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.956-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996'. Ident: collection-261--2310912778499990807, commit timestamp: Timestamp(1574796717, 2218)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.953-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 (27269583-4a64-45c0-b2a4-50850668c464)'. Ident: 'index-262--7234316082034423155', commit timestamp: 'Timestamp(1574796717, 2218)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.975-0500 I STORAGE [conn88] Index build initialized: f5c57de3-e433-4635-95f2-a5f775e6a00b: test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b (2b6b795a-0c83-4765-9ebd-8e106f88b4e7 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.959-0500 I STORAGE [ReplWriterWorker-11] createCollection: test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b with provided UUID: 2b6b795a-0c83-4765-9ebd-8e106f88b4e7 and options: { uuid: UUID("2b6b795a-0c83-4765-9ebd-8e106f88b4e7"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.953-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996 (27269583-4a64-45c0-b2a4-50850668c464)'. Ident: 'index-267--7234316082034423155', commit timestamp: 'Timestamp(1574796717, 2218)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.975-0500 I INDEX [conn88] Waiting for index build to complete: f5c57de3-e433-4635-95f2-a5f775e6a00b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.973-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.953-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996'. Ident: collection-261--7234316082034423155, commit timestamp: Timestamp(1574796717, 2218)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.975-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.974-0500 I STORAGE [ReplWriterWorker-10] createCollection: test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 with provided UUID: 11d8f567-4661-4e25-8595-4ce121f2030d and options: { uuid: UUID("11d8f567-4661-4e25-8595-4ce121f2030d"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.956-0500 I STORAGE [ReplWriterWorker-9] createCollection: test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b with provided UUID: 2b6b795a-0c83-4765-9ebd-8e106f88b4e7 and options: { uuid: UUID("2b6b795a-0c83-4765-9ebd-8e106f88b4e7"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.983-0500 I INDEX [conn82] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.990-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.969-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.983-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:57.991-0500 I STORAGE [ReplWriterWorker-5] createCollection: test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 with provided UUID: bf494165-965e-497d-968d-60b56d9d4c4c and options: { uuid: UUID("bf494165-965e-497d-968d-60b56d9d4c4c"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.970-0500 I STORAGE [ReplWriterWorker-15] createCollection: test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 with provided UUID: 11d8f567-4661-4e25-8595-4ce121f2030d and options: { uuid: UUID("11d8f567-4661-4e25-8595-4ce121f2030d"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.983-0500 I INDEX [conn82] Registering index build: 3b3bd242-897c-49a9-9b72-0359d28b6114
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.006-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.984-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:57.992-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.007-0500 I STORAGE [ReplWriterWorker-12] createCollection: test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d with provided UUID: a452286d-4741-4bd2-84c4-65f270281c9d and options: { uuid: UUID("a452286d-4741-4bd2-84c4-65f270281c9d"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.985-0500 I STORAGE [ReplWriterWorker-5] createCollection: test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 with provided UUID: bf494165-965e-497d-968d-60b56d9d4c4c and options: { uuid: UUID("bf494165-965e-497d-968d-60b56d9d4c4c"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.000-0500 I INDEX [conn84] index build: starting on test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.021-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52614 #90 (15 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.998-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.000-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.021-0500 I NETWORK [conn90] received client metadata from 127.0.0.1:52614 conn90: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:57.999-0500 I STORAGE [ReplWriterWorker-11] createCollection: test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d with provided UUID: a452286d-4741-4bd2-84c4-65f270281c9d and options: { uuid: UUID("a452286d-4741-4bd2-84c4-65f270281c9d"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.001-0500 I STORAGE [conn84] Index build initialized: 3335d930-572a-4bfe-936e-dddd4ac058bd: test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 (11d8f567-4661-4e25-8595-4ce121f2030d ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.023-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.013-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.001-0500 I INDEX [conn84] Waiting for index build to complete: 3335d930-572a-4bfe-936e-dddd4ac058bd
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.027-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 (6b97609e-9e18-4a96-9edd-cc4e6b6742fe) to test4_fsmdb0.agg_out and drop c5575831-c760-48cd-9685-6531387fa44c.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.015-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 (6b97609e-9e18-4a96-9edd-cc4e6b6742fe) to test4_fsmdb0.agg_out and drop c5575831-c760-48cd-9685-6531387fa44c.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.001-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.027-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test4_fsmdb0.agg_out (c5575831-c760-48cd-9685-6531387fa44c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796717, 2531), t: 1 } and commit timestamp Timestamp(1574796717, 2531)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.016-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test4_fsmdb0.agg_out (c5575831-c760-48cd-9685-6531387fa44c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796717, 2531), t: 1 } and commit timestamp Timestamp(1574796717, 2531)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.001-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: f5c57de3-e433-4635-95f2-a5f775e6a00b: test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b ( 2b6b795a-0c83-4765-9ebd-8e106f88b4e7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.027-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test4_fsmdb0.agg_out (c5575831-c760-48cd-9685-6531387fa44c).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.016-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test4_fsmdb0.agg_out (c5575831-c760-48cd-9685-6531387fa44c).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.002-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.027-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 6b97609e-9e18-4a96-9edd-cc4e6b6742fe from test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.016-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 6b97609e-9e18-4a96-9edd-cc4e6b6742fe from test4_fsmdb0.tmp.agg_out.4a5cdfc8-0e6d-40e3-8be1-e0670a95ea26 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.012-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.027-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (c5575831-c760-48cd-9685-6531387fa44c)'. Ident: 'index-264--2310912778499990807', commit timestamp: 'Timestamp(1574796717, 2531)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.016-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (c5575831-c760-48cd-9685-6531387fa44c)'. Ident: 'index-264--7234316082034423155', commit timestamp: 'Timestamp(1574796717, 2531)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.017-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47412 #196 (49 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.027-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (c5575831-c760-48cd-9685-6531387fa44c)'. Ident: 'index-275--2310912778499990807', commit timestamp: 'Timestamp(1574796717, 2531)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.016-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (c5575831-c760-48cd-9685-6531387fa44c)'. Ident: 'index-275--7234316082034423155', commit timestamp: 'Timestamp(1574796717, 2531)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.017-0500 I NETWORK [conn196] received client metadata from 127.0.0.1:47412 conn196: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.027-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-263--2310912778499990807, commit timestamp: Timestamp(1574796717, 2531)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.016-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-263--7234316082034423155, commit timestamp: Timestamp(1574796717, 2531)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.019-0500 I INDEX [conn77] index build: starting on test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.028-0500 I STORAGE [ReplWriterWorker-5] createCollection: test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d with provided UUID: 0235f091-e4f7-4096-8802-85a62492f0f9 and options: { uuid: UUID("0235f091-e4f7-4096-8802-85a62492f0f9"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.016-0500 I STORAGE [ReplWriterWorker-15] createCollection: test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d with provided UUID: 0235f091-e4f7-4096-8802-85a62492f0f9 and options: { uuid: UUID("0235f091-e4f7-4096-8802-85a62492f0f9"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.019-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.032-0500 W CONTROL [conn90] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 126 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.021-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35976 #90 (15 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.019-0500 I STORAGE [conn77] Index build initialized: 562d2f03-ca96-493a-a291-457e7fb31817: test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d (a452286d-4741-4bd2-84c4-65f270281c9d ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.044-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.022-0500 I NETWORK [conn90] received client metadata from 127.0.0.1:35976 conn90: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.019-0500 I INDEX [conn77] Waiting for index build to complete: 562d2f03-ca96-493a-a291-457e7fb31817
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.062-0500 I INDEX [ReplWriterWorker-1] index build: starting on test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.030-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.019-0500 I INDEX [conn88] Index build completed: f5c57de3-e433-4635-95f2-a5f775e6a00b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.062-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.032-0500 W CONTROL [conn90] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 79 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.020-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 3335d930-572a-4bfe-936e-dddd4ac058bd: test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 ( 11d8f567-4661-4e25-8595-4ce121f2030d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.062-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 338492da-5f28-41c4-9d6f-0ec5dde96600: test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b (2b6b795a-0c83-4765-9ebd-8e106f88b4e7 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.048-0500 I INDEX [ReplWriterWorker-4] index build: starting on test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.020-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47414 #197 (50 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.062-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.048-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.020-0500 I NETWORK [conn197] received client metadata from 127.0.0.1:47414 conn197: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.063-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.048-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 31351667-205b-4315-b59c-41c6e45bf5d6: test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b (2b6b795a-0c83-4765-9ebd-8e106f88b4e7 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.031-0500 W CONTROL [conn197] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 84 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.066-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.048-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.039-0500 I INDEX [conn85] index build: starting on test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.075-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 338492da-5f28-41c4-9d6f-0ec5dde96600: test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b ( 2b6b795a-0c83-4765-9ebd-8e106f88b4e7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.049-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.039-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.082-0500 I INDEX [ReplWriterWorker-6] index build: starting on test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.051-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.039-0500 I STORAGE [conn85] Index build initialized: 1aab3df1-081f-47a4-a411-292dd0c53d35: test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 (bf494165-965e-497d-968d-60b56d9d4c4c ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.082-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.059-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 31351667-205b-4315-b59c-41c6e45bf5d6: test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b ( 2b6b795a-0c83-4765-9ebd-8e106f88b4e7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.039-0500 I INDEX [conn85] Waiting for index build to complete: 1aab3df1-081f-47a4-a411-292dd0c53d35
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.082-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 796328fd-8a44-4327-b305-dccfd0641015: test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 (11d8f567-4661-4e25-8595-4ce121f2030d ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.068-0500 I INDEX [ReplWriterWorker-5] index build: starting on test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.039-0500 I INDEX [conn84] Index build completed: 3335d930-572a-4bfe-936e-dddd4ac058bd
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.082-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.068-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.039-0500 I COMMAND [conn88] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.083-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.068-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 8b8f8bb8-53fa-44ea-9329-337a2f5ebdc4: test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 (11d8f567-4661-4e25-8595-4ce121f2030d ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.039-0500 I COMMAND [conn84] command test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796717, 2530), signature: { hash: BinData(0, B155B59C95953FCA2BC7F7AB81A2463789C1D0B7), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 13461 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 100ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.086-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.068-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.039-0500 I STORAGE [conn88] dropCollection: test4_fsmdb0.agg_out (6b97609e-9e18-4a96-9edd-cc4e6b6742fe) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796718, 506), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.093-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 796328fd-8a44-4327-b305-dccfd0641015: test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 ( 11d8f567-4661-4e25-8595-4ce121f2030d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.068-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.039-0500 I STORAGE [conn88] Finishing collection drop for test4_fsmdb0.agg_out (6b97609e-9e18-4a96-9edd-cc4e6b6742fe).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.095-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b (2b6b795a-0c83-4765-9ebd-8e106f88b4e7) to test4_fsmdb0.agg_out and drop 6b97609e-9e18-4a96-9edd-cc4e6b6742fe.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.072-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.039-0500 I STORAGE [conn88] renameCollection: renaming collection 2b6b795a-0c83-4765-9ebd-8e106f88b4e7 from test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.095-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test4_fsmdb0.agg_out (6b97609e-9e18-4a96-9edd-cc4e6b6742fe) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796718, 506), t: 1 } and commit timestamp Timestamp(1574796718, 506)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.074-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 8b8f8bb8-53fa-44ea-9329-337a2f5ebdc4: test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 ( 11d8f567-4661-4e25-8595-4ce121f2030d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.040-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (6b97609e-9e18-4a96-9edd-cc4e6b6742fe)'. Ident: 'index-263--2588534479858262356', commit timestamp: 'Timestamp(1574796718, 506)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.095-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test4_fsmdb0.agg_out (6b97609e-9e18-4a96-9edd-cc4e6b6742fe).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.075-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b (2b6b795a-0c83-4765-9ebd-8e106f88b4e7) to test4_fsmdb0.agg_out and drop 6b97609e-9e18-4a96-9edd-cc4e6b6742fe.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.040-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (6b97609e-9e18-4a96-9edd-cc4e6b6742fe)'. Ident: 'index-266--2588534479858262356', commit timestamp: 'Timestamp(1574796718, 506)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.095-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 2b6b795a-0c83-4765-9ebd-8e106f88b4e7 from test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.075-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test4_fsmdb0.agg_out (6b97609e-9e18-4a96-9edd-cc4e6b6742fe) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796718, 506), t: 1 } and commit timestamp Timestamp(1574796718, 506)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.040-0500 I STORAGE [conn88] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-261--2588534479858262356, commit timestamp: Timestamp(1574796718, 506)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.095-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (6b97609e-9e18-4a96-9edd-cc4e6b6742fe)'. Ident: 'index-266--2310912778499990807', commit timestamp: 'Timestamp(1574796718, 506)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.075-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test4_fsmdb0.agg_out (6b97609e-9e18-4a96-9edd-cc4e6b6742fe).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.040-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.095-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (6b97609e-9e18-4a96-9edd-cc4e6b6742fe)'. Ident: 'index-273--2310912778499990807', commit timestamp: 'Timestamp(1574796718, 506)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.075-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 2b6b795a-0c83-4765-9ebd-8e106f88b4e7 from test4_fsmdb0.tmp.agg_out.96cc917c-58c6-4a81-8063-62080d273a2b to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.040-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.095-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-265--2310912778499990807, commit timestamp: Timestamp(1574796718, 506)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.075-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (6b97609e-9e18-4a96-9edd-cc4e6b6742fe)'. Ident: 'index-266--7234316082034423155', commit timestamp: 'Timestamp(1574796718, 506)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.040-0500 I COMMAND [conn65] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2595806212135161572, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7159019506320359035, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796717894), clusterTime: Timestamp(1574796717, 2218) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796717, 2282), signature: { hash: BinData(0, B155B59C95953FCA2BC7F7AB81A2463789C1D0B7), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 145ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.096-0500 I STORAGE [ReplWriterWorker-4] createCollection: test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 with provided UUID: cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b and options: { uuid: UUID("cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.075-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (6b97609e-9e18-4a96-9edd-cc4e6b6742fe)'. Ident: 'index-273--7234316082034423155', commit timestamp: 'Timestamp(1574796718, 506)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.040-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.110-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.075-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-265--7234316082034423155, commit timestamp: Timestamp(1574796718, 506)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.043-0500 I STORAGE [conn84] createCollection: test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 with generated UUID: cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.125-0500 I INDEX [ReplWriterWorker-2] index build: starting on test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.081-0500 I STORAGE [ReplWriterWorker-12] createCollection: test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 with provided UUID: cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b and options: { uuid: UUID("cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.043-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.125-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.096-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.050-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.125-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 11e409ff-5ac2-4392-9214-b3431d4c9771: test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d (a452286d-4741-4bd2-84c4-65f270281c9d ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.125-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.125-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.128-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.131-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 11e409ff-5ac2-4392-9214-b3431d4c9771: test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d ( a452286d-4741-4bd2-84c4-65f270281c9d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.149-0500 I INDEX [ReplWriterWorker-15] index build: starting on test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.149-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.149-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 500c9078-654f-436e-b291-3865a53a4043: test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 (bf494165-965e-497d-968d-60b56d9d4c4c ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.150-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.150-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.153-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.161-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 500c9078-654f-436e-b291-3865a53a4043: test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 ( bf494165-965e-497d-968d-60b56d9d4c4c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.167-0500 I INDEX [ReplWriterWorker-4] index build: starting on test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.167-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.167-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: df2e6496-01af-4821-b962-8a25c2bade79: test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d (0235f091-e4f7-4096-8802-85a62492f0f9 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.167-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.168-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.168-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 (11d8f567-4661-4e25-8595-4ce121f2030d) to test4_fsmdb0.agg_out and drop 2b6b795a-0c83-4765-9ebd-8e106f88b4e7.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.170-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.170-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test4_fsmdb0.agg_out (2b6b795a-0c83-4765-9ebd-8e106f88b4e7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796718, 1015), t: 1 } and commit timestamp Timestamp(1574796718, 1015)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.170-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test4_fsmdb0.agg_out (2b6b795a-0c83-4765-9ebd-8e106f88b4e7).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.170-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 11d8f567-4661-4e25-8595-4ce121f2030d from test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.170-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (2b6b795a-0c83-4765-9ebd-8e106f88b4e7)'. Ident: 'index-278--2310912778499990807', commit timestamp: 'Timestamp(1574796718, 1015)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.170-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (2b6b795a-0c83-4765-9ebd-8e106f88b4e7)'. Ident: 'index-287--2310912778499990807', commit timestamp: 'Timestamp(1574796718, 1015)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.170-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-277--2310912778499990807, commit timestamp: Timestamp(1574796718, 1015)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:31:58.172-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: df2e6496-01af-4821-b962-8a25c2bade79: test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d ( 0235f091-e4f7-4096-8802-85a62492f0f9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.775-0500 I STORAGE [ReplWriterWorker-14] createCollection: test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 with provided UUID: ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46 and options: { uuid: UUID("ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.789-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.109-0500 I INDEX [ReplWriterWorker-6] index build: starting on test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.058-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 562d2f03-ca96-493a-a291-457e7fb31817: test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d ( a452286d-4741-4bd2-84c4-65f270281c9d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.066-0500 I INDEX [conn82] index build: starting on test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.066-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.066-0500 I STORAGE [conn82] Index build initialized: 3b3bd242-897c-49a9-9b72-0359d28b6114: test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d (0235f091-e4f7-4096-8802-85a62492f0f9 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.066-0500 I INDEX [conn82] Waiting for index build to complete: 3b3bd242-897c-49a9-9b72-0359d28b6114
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.066-0500 I INDEX [conn77] Index build completed: 562d2f03-ca96-493a-a291-457e7fb31817
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.066-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.066-0500 I COMMAND [conn77] command test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796717, 2531), signature: { hash: BinData(0, B155B59C95953FCA2BC7F7AB81A2463789C1D0B7), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 113ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.069-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.078-0500 I INDEX [conn84] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.078-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.080-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 1aab3df1-081f-47a4-a411-292dd0c53d35: test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 ( bf494165-965e-497d-968d-60b56d9d4c4c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.080-0500 I INDEX [conn85] Index build completed: 1aab3df1-081f-47a4-a411-292dd0c53d35
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.080-0500 I COMMAND [conn85] command test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796717, 2530), signature: { hash: BinData(0, B155B59C95953FCA2BC7F7AB81A2463789C1D0B7), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 6649 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 133ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.081-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.082-0500 I COMMAND [conn88] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.082-0500 I STORAGE [conn88] dropCollection: test4_fsmdb0.agg_out (2b6b795a-0c83-4765-9ebd-8e106f88b4e7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796718, 1015), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.082-0500 I STORAGE [conn88] Finishing collection drop for test4_fsmdb0.agg_out (2b6b795a-0c83-4765-9ebd-8e106f88b4e7).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.082-0500 I STORAGE [conn88] renameCollection: renaming collection 11d8f567-4661-4e25-8595-4ce121f2030d from test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.082-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (2b6b795a-0c83-4765-9ebd-8e106f88b4e7)'. Ident: 'index-272--2588534479858262356', commit timestamp: 'Timestamp(1574796718, 1015)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.082-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (2b6b795a-0c83-4765-9ebd-8e106f88b4e7)'. Ident: 'index-276--2588534479858262356', commit timestamp: 'Timestamp(1574796718, 1015)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.082-0500 I STORAGE [conn88] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-268--2588534479858262356, commit timestamp: Timestamp(1574796718, 1015)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.082-0500 I INDEX [conn84] Registering index build: 1c084e89-c5d7-40e4-88f2-23fb181a4104
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.082-0500 I COMMAND [conn64] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1664021116629937072, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5339545277730689154, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796717894), clusterTime: Timestamp(1574796717, 2218) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796717, 2346), signature: { hash: BinData(0, B155B59C95953FCA2BC7F7AB81A2463789C1D0B7), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 187ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.085-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 3b3bd242-897c-49a9-9b72-0359d28b6114: test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d ( 0235f091-e4f7-4096-8802-85a62492f0f9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.085-0500 I STORAGE [conn85] createCollection: test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 with generated UUID: ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:31:58.101-0500 I INDEX [conn84] index build: starting on test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.760-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.760-0500 I STORAGE [conn84] Index build initialized: 1c084e89-c5d7-40e4-88f2-23fb181a4104: test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.761-0500 I INDEX [conn84] Waiting for index build to complete: 1c084e89-c5d7-40e4-88f2-23fb181a4104
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.761-0500 I INDEX [conn82] Index build completed: 3b3bd242-897c-49a9-9b72-0359d28b6114
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.761-0500 I COMMAND [conn82] command test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796717, 2533), signature: { hash: BinData(0, B155B59C95953FCA2BC7F7AB81A2463789C1D0B7), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 422 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 4777ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.771-0500 I INDEX [conn85] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.771-0500 I COMMAND [conn85] command test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 appName: "tid:4" command: create { create: "tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9", temp: true, validationLevel: "strict", validationAction: "error", databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796718, 1079), signature: { hash: BinData(0, 60E17FFC301900125993E52B535A0EF47806D1B6), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 4686ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.771-0500 I COMMAND [conn77] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.772-0500 I STORAGE [conn77] dropCollection: test4_fsmdb0.agg_out (11d8f567-4661-4e25-8595-4ce121f2030d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 2), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.772-0500 I STORAGE [conn77] Finishing collection drop for test4_fsmdb0.agg_out (11d8f567-4661-4e25-8595-4ce121f2030d).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.772-0500 I STORAGE [conn77] renameCollection: renaming collection a452286d-4741-4bd2-84c4-65f270281c9d from test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.772-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (11d8f567-4661-4e25-8595-4ce121f2030d)'. Ident: 'index-273--2588534479858262356', commit timestamp: 'Timestamp(1574796722, 2)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.772-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (11d8f567-4661-4e25-8595-4ce121f2030d)'. Ident: 'index-280--2588534479858262356', commit timestamp: 'Timestamp(1574796722, 2)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.772-0500 I STORAGE [conn77] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-269--2588534479858262356, commit timestamp: Timestamp(1574796722, 2)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.772-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.772-0500 I COMMAND [conn77] command test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d appName: "tid:2" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d", to: "test4_fsmdb0.agg_out", collectionOptions: { validationLevel: "strict", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796718, 1580), signature: { hash: BinData(0, 60E17FFC301900125993E52B535A0EF47806D1B6), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 4670605 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 4671ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.772-0500 I INDEX [conn85] Registering index build: ed3a08fa-c81a-4a7a-bb64-4cab4cb6ca95
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.772-0500 I COMMAND [conn197] command test4_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796718, 509), lsid: { id: UUID("3f42616f-e948-49d9-97ee-c2eb72d5ff98") }, $clusterTime: { clusterTime: Timestamp(1574796718, 573), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796718, 509). Collection minimum timestamp is Timestamp(1574796718, 1014)" errName:SnapshotUnavailable errCode:246 reslen:581 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 4644180 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 4644ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.772-0500 I COMMAND [conn62] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8766208209831237709, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8609095727780815943, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796717895), clusterTime: Timestamp(1574796717, 2346) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796717, 2348), signature: { hash: BinData(0, B155B59C95953FCA2BC7F7AB81A2463789C1D0B7), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 4876ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.773-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.775-0500 I STORAGE [conn77] createCollection: test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 with generated UUID: f72342ad-48c4-4fba-9026-a5fc9dc65208 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.781-0500 I COMMAND [conn88] command test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 appName: "tid:3" command: insert { insert: "tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41", bypassDocumentValidation: false, ordered: false, documents: 500, shardVersion: [ Timestamp(0, 0), ObjectId('000000000000000000000000') ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, writeConcern: { w: 1, wtimeout: 0 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796718, 1080), signature: { hash: BinData(0, 60E17FFC301900125993E52B535A0EF47806D1B6), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } ninserted:500 keysInserted:1000 numYields:0 reslen:400 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 8 } }, ReplicationStateTransition: { acquireCount: { w: 8 } }, Global: { acquireCount: { w: 8 } }, Database: { acquireCount: { w: 8 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 4669696 } }, Collection: { acquireCount: { w: 8 } }, Mutex: { acquireCount: { r: 1016 } } } flowControl:{ acquireCount: 8 } storage:{ timeWaitingMicros: { schemaLock: 6892 } } protocol:op_msg 4687ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.781-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.109-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.109-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: f9d84643-b5f3-4066-829c-9073ecaf3ff8: test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d (a452286d-4741-4bd2-84c4-65f270281c9d ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.109-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.110-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.111-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.115-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: f9d84643-b5f3-4066-829c-9073ecaf3ff8: test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d ( a452286d-4741-4bd2-84c4-65f270281c9d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.128-0500 I INDEX [ReplWriterWorker-11] index build: starting on test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.128-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.128-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 2fa9c206-3c72-43bd-a703-09a7f2645df9: test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 (bf494165-965e-497d-968d-60b56d9d4c4c ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.128-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.129-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.131-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.139-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 2fa9c206-3c72-43bd-a703-09a7f2645df9: test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 ( bf494165-965e-497d-968d-60b56d9d4c4c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.146-0500 I INDEX [ReplWriterWorker-4] index build: starting on test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.146-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.146-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 2e72a4c7-e615-4963-a967-b192a4a5001d: test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d (0235f091-e4f7-4096-8802-85a62492f0f9 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.146-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.147-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.148-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 (11d8f567-4661-4e25-8595-4ce121f2030d) to test4_fsmdb0.agg_out and drop 2b6b795a-0c83-4765-9ebd-8e106f88b4e7.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.149-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.149-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test4_fsmdb0.agg_out (2b6b795a-0c83-4765-9ebd-8e106f88b4e7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796718, 1015), t: 1 } and commit timestamp Timestamp(1574796718, 1015)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.149-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test4_fsmdb0.agg_out (2b6b795a-0c83-4765-9ebd-8e106f88b4e7).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.149-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 11d8f567-4661-4e25-8595-4ce121f2030d from test4_fsmdb0.tmp.agg_out.51238036-c5b4-43aa-a48f-8deb42d7f2b6 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.149-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (2b6b795a-0c83-4765-9ebd-8e106f88b4e7)'. Ident: 'index-278--7234316082034423155', commit timestamp: 'Timestamp(1574796718, 1015)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.150-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (2b6b795a-0c83-4765-9ebd-8e106f88b4e7)'. Ident: 'index-287--7234316082034423155', commit timestamp: 'Timestamp(1574796718, 1015)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.150-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-277--7234316082034423155, commit timestamp: Timestamp(1574796718, 1015)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:31:58.150-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 2e72a4c7-e615-4963-a967-b192a4a5001d: test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d ( 0235f091-e4f7-4096-8802-85a62492f0f9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.775-0500 I STORAGE [ReplWriterWorker-10] createCollection: test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 with provided UUID: ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46 and options: { uuid: UUID("ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.788-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.793-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d (a452286d-4741-4bd2-84c4-65f270281c9d) to test4_fsmdb0.agg_out and drop 11d8f567-4661-4e25-8595-4ce121f2030d.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.793-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test4_fsmdb0.agg_out (11d8f567-4661-4e25-8595-4ce121f2030d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 2), t: 1 } and commit timestamp Timestamp(1574796722, 2)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.793-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test4_fsmdb0.agg_out (11d8f567-4661-4e25-8595-4ce121f2030d).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.793-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection a452286d-4741-4bd2-84c4-65f270281c9d from test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.793-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (11d8f567-4661-4e25-8595-4ce121f2030d)'. Ident: 'index-280--7234316082034423155', commit timestamp: 'Timestamp(1574796722, 2)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.793-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (11d8f567-4661-4e25-8595-4ce121f2030d)'. Ident: 'index-289--7234316082034423155', commit timestamp: 'Timestamp(1574796722, 2)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.794-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-279--7234316082034423155, commit timestamp: Timestamp(1574796722, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.795-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d (a452286d-4741-4bd2-84c4-65f270281c9d) to test4_fsmdb0.agg_out and drop 11d8f567-4661-4e25-8595-4ce121f2030d.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.795-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test4_fsmdb0.agg_out (11d8f567-4661-4e25-8595-4ce121f2030d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 2), t: 1 } and commit timestamp Timestamp(1574796722, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.795-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test4_fsmdb0.agg_out (11d8f567-4661-4e25-8595-4ce121f2030d).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.795-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection a452286d-4741-4bd2-84c4-65f270281c9d from test4_fsmdb0.tmp.agg_out.cb8663aa-875a-41b8-b1d6-998370b4a57d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.795-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (11d8f567-4661-4e25-8595-4ce121f2030d)'. Ident: 'index-280--2310912778499990807', commit timestamp: 'Timestamp(1574796722, 2)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.795-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (11d8f567-4661-4e25-8595-4ce121f2030d)'. Ident: 'index-289--2310912778499990807', commit timestamp: 'Timestamp(1574796722, 2)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.795-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-279--2310912778499990807, commit timestamp: Timestamp(1574796722, 2)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.800-0500 I INDEX [conn85] index build: starting on test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.800-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.800-0500 I STORAGE [conn85] Index build initialized: ed3a08fa-c81a-4a7a-bb64-4cab4cb6ca95: test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.800-0500 I INDEX [conn85] Waiting for index build to complete: ed3a08fa-c81a-4a7a-bb64-4cab4cb6ca95
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.800-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 1c084e89-c5d7-40e4-88f2-23fb181a4104: test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 ( cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.800-0500 I INDEX [conn84] Index build completed: 1c084e89-c5d7-40e4-88f2-23fb181a4104
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.800-0500 I COMMAND [conn84] command test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796718, 1012), signature: { hash: BinData(0, 60E17FFC301900125993E52B535A0EF47806D1B6), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 3693 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 4721ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:02.806-0500 I COMMAND [conn52] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a") }, $clusterTime: { clusterTime: Timestamp(1574796717, 2282), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 4910ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.808-0500 I STORAGE [ReplWriterWorker-11] createCollection: test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 with provided UUID: f72342ad-48c4-4fba-9026-a5fc9dc65208 and options: { uuid: UUID("f72342ad-48c4-4fba-9026-a5fc9dc65208"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.809-0500 I STORAGE [ReplWriterWorker-10] createCollection: test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 with provided UUID: f72342ad-48c4-4fba-9026-a5fc9dc65208 and options: { uuid: UUID("f72342ad-48c4-4fba-9026-a5fc9dc65208"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:02.877-0500 I COMMAND [conn164] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458") }, $clusterTime: { clusterTime: Timestamp(1574796718, 506), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 4836ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:02.898-0500 I COMMAND [conn169] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51") }, $clusterTime: { clusterTime: Timestamp(1574796718, 1079), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 4814ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.805-0500 I INDEX [conn77] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.823-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.825-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:02.937-0500 I COMMAND [conn170] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6") }, $clusterTime: { clusterTime: Timestamp(1574796722, 66), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 163ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.805-0500 I COMMAND [conn88] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:02.840-0500 I COMMAND [conn53] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70") }, $clusterTime: { clusterTime: Timestamp(1574796717, 2531), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 4885ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.842-0500 I INDEX [ReplWriterWorker-11] index build: starting on test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.842-0500 I INDEX [ReplWriterWorker-12] index build: starting on test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:03.072-0500 I COMMAND [conn164] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458") }, $clusterTime: { clusterTime: Timestamp(1574796722, 2020), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 173ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:03.139-0500 I COMMAND [conn169] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51") }, $clusterTime: { clusterTime: Timestamp(1574796722, 2085), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 239ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:02.976-0500 I COMMAND [conn52] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a") }, $clusterTime: { clusterTime: Timestamp(1574796722, 571), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 168ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.842-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.842-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.805-0500 I STORAGE [conn88] dropCollection: test4_fsmdb0.agg_out (a452286d-4741-4bd2-84c4-65f270281c9d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 507), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:03.142-0500 I COMMAND [conn170] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6") }, $clusterTime: { clusterTime: Timestamp(1574796722, 2590), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 204ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:03.014-0500 I COMMAND [conn53] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70") }, $clusterTime: { clusterTime: Timestamp(1574796722, 1076), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 172ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.842-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 59a55c09-0bda-4432-8d1f-e80cc126b69b: test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.842-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: ac941b34-f579-48c1-a96e-0388686ff46e: test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.805-0500 I STORAGE [conn88] Finishing collection drop for test4_fsmdb0.agg_out (a452286d-4741-4bd2-84c4-65f270281c9d).
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:05.996-0500 I COMMAND [conn52] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a") }, $clusterTime: { clusterTime: Timestamp(1574796722, 3093), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3019ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.842-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.843-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.805-0500 I STORAGE [conn88] renameCollection: renaming collection bf494165-965e-497d-968d-60b56d9d4c4c from test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.843-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.843-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.805-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (a452286d-4741-4bd2-84c4-65f270281c9d)'. Ident: 'index-275--2588534479858262356', commit timestamp: 'Timestamp(1574796722, 507)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.845-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 (bf494165-965e-497d-968d-60b56d9d4c4c) to test4_fsmdb0.agg_out and drop a452286d-4741-4bd2-84c4-65f270281c9d.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.845-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 (bf494165-965e-497d-968d-60b56d9d4c4c) to test4_fsmdb0.agg_out and drop a452286d-4741-4bd2-84c4-65f270281c9d.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.805-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (a452286d-4741-4bd2-84c4-65f270281c9d)'. Ident: 'index-282--2588534479858262356', commit timestamp: 'Timestamp(1574796722, 507)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.846-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.846-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.805-0500 I STORAGE [conn88] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-271--2588534479858262356, commit timestamp: Timestamp(1574796722, 507)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.846-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test4_fsmdb0.agg_out (a452286d-4741-4bd2-84c4-65f270281c9d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 507), t: 1 } and commit timestamp Timestamp(1574796722, 507)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.846-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test4_fsmdb0.agg_out (a452286d-4741-4bd2-84c4-65f270281c9d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 507), t: 1 } and commit timestamp Timestamp(1574796722, 507)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.805-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.846-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test4_fsmdb0.agg_out (a452286d-4741-4bd2-84c4-65f270281c9d).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.846-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test4_fsmdb0.agg_out (a452286d-4741-4bd2-84c4-65f270281c9d).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.805-0500 I INDEX [conn77] Registering index build: 7dc60415-bd1b-4b8f-a1aa-37f493f0e019
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.846-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection bf494165-965e-497d-968d-60b56d9d4c4c from test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.846-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection bf494165-965e-497d-968d-60b56d9d4c4c from test4_fsmdb0.tmp.agg_out.61c68af2-5e03-401d-86b6-aa907e750f41 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.806-0500 I COMMAND [conn80] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3653006092236159782, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7135808831356019758, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796717895), clusterTime: Timestamp(1574796717, 2282) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796717, 2348), signature: { hash: BinData(0, B155B59C95953FCA2BC7F7AB81A2463789C1D0B7), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 4909ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.846-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (a452286d-4741-4bd2-84c4-65f270281c9d)'. Ident: 'index-284--2310912778499990807', commit timestamp: 'Timestamp(1574796722, 507)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.846-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (a452286d-4741-4bd2-84c4-65f270281c9d)'. Ident: 'index-284--7234316082034423155', commit timestamp: 'Timestamp(1574796722, 507)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.806-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.846-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (a452286d-4741-4bd2-84c4-65f270281c9d)'. Ident: 'index-293--2310912778499990807', commit timestamp: 'Timestamp(1574796722, 507)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.847-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (a452286d-4741-4bd2-84c4-65f270281c9d)'. Ident: 'index-293--7234316082034423155', commit timestamp: 'Timestamp(1574796722, 507)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.809-0500 I STORAGE [conn88] createCollection: test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 with generated UUID: 97059ead-49a2-46c0-98f4-e78a9d171500 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.846-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-283--2310912778499990807, commit timestamp: Timestamp(1574796722, 507)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.847-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-283--7234316082034423155, commit timestamp: Timestamp(1574796722, 507)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.814-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.849-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 59a55c09-0bda-4432-8d1f-e80cc126b69b: test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 ( cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.848-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: ac941b34-f579-48c1-a96e-0388686ff46e: test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 ( cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.829-0500 I INDEX [conn77] index build: starting on test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.850-0500 I STORAGE [ReplWriterWorker-8] createCollection: test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 with provided UUID: 97059ead-49a2-46c0-98f4-e78a9d171500 and options: { uuid: UUID("97059ead-49a2-46c0-98f4-e78a9d171500"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.851-0500 I STORAGE [ReplWriterWorker-4] createCollection: test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 with provided UUID: 97059ead-49a2-46c0-98f4-e78a9d171500 and options: { uuid: UUID("97059ead-49a2-46c0-98f4-e78a9d171500"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.829-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.865-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.863-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.830-0500 I STORAGE [conn77] Index build initialized: 7dc60415-bd1b-4b8f-a1aa-37f493f0e019: test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 (f72342ad-48c4-4fba-9026-a5fc9dc65208 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.884-0500 I INDEX [ReplWriterWorker-2] index build: starting on test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.881-0500 I INDEX [ReplWriterWorker-3] index build: starting on test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.830-0500 I INDEX [conn77] Waiting for index build to complete: 7dc60415-bd1b-4b8f-a1aa-37f493f0e019
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.884-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.881-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.831-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: ed3a08fa-c81a-4a7a-bb64-4cab4cb6ca95: test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 ( ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.884-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 1399f768-a37f-4807-9ddb-a2997a156c87: test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.881-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: e09d14d5-49c1-46c3-8583-c5eba417ef3f: test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.831-0500 I INDEX [conn85] Index build completed: ed3a08fa-c81a-4a7a-bb64-4cab4cb6ca95
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.884-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.881-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.839-0500 I INDEX [conn88] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.885-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.882-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.839-0500 I COMMAND [conn82] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.886-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d (0235f091-e4f7-4096-8802-85a62492f0f9) to test4_fsmdb0.agg_out and drop bf494165-965e-497d-968d-60b56d9d4c4c.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.883-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d (0235f091-e4f7-4096-8802-85a62492f0f9) to test4_fsmdb0.agg_out and drop bf494165-965e-497d-968d-60b56d9d4c4c.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.839-0500 I STORAGE [conn82] dropCollection: test4_fsmdb0.agg_out (bf494165-965e-497d-968d-60b56d9d4c4c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 1012), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.888-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.884-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.839-0500 I STORAGE [conn82] Finishing collection drop for test4_fsmdb0.agg_out (bf494165-965e-497d-968d-60b56d9d4c4c).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.888-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test4_fsmdb0.agg_out (bf494165-965e-497d-968d-60b56d9d4c4c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 1012), t: 1 } and commit timestamp Timestamp(1574796722, 1012)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.885-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test4_fsmdb0.agg_out (bf494165-965e-497d-968d-60b56d9d4c4c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 1012), t: 1 } and commit timestamp Timestamp(1574796722, 1012)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.840-0500 I STORAGE [conn82] renameCollection: renaming collection 0235f091-e4f7-4096-8802-85a62492f0f9 from test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.888-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test4_fsmdb0.agg_out (bf494165-965e-497d-968d-60b56d9d4c4c).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.885-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test4_fsmdb0.agg_out (bf494165-965e-497d-968d-60b56d9d4c4c).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.840-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (bf494165-965e-497d-968d-60b56d9d4c4c)'. Ident: 'index-274--2588534479858262356', commit timestamp: 'Timestamp(1574796722, 1012)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.888-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 0235f091-e4f7-4096-8802-85a62492f0f9 from test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.885-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 0235f091-e4f7-4096-8802-85a62492f0f9 from test4_fsmdb0.tmp.agg_out.f7d9737e-f17a-4c8b-bf82-8b630f9f354d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.840-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (bf494165-965e-497d-968d-60b56d9d4c4c)'. Ident: 'index-284--2588534479858262356', commit timestamp: 'Timestamp(1574796722, 1012)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.888-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (bf494165-965e-497d-968d-60b56d9d4c4c)'. Ident: 'index-282--2310912778499990807', commit timestamp: 'Timestamp(1574796722, 1012)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.885-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (bf494165-965e-497d-968d-60b56d9d4c4c)'. Ident: 'index-282--7234316082034423155', commit timestamp: 'Timestamp(1574796722, 1012)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.840-0500 I STORAGE [conn82] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-270--2588534479858262356, commit timestamp: Timestamp(1574796722, 1012)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.888-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (bf494165-965e-497d-968d-60b56d9d4c4c)'. Ident: 'index-295--2310912778499990807', commit timestamp: 'Timestamp(1574796722, 1012)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.885-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (bf494165-965e-497d-968d-60b56d9d4c4c)'. Ident: 'index-295--7234316082034423155', commit timestamp: 'Timestamp(1574796722, 1012)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.840-0500 I INDEX [conn88] Registering index build: 850c55c6-40bc-43fd-9c09-f18320242d9c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.888-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-281--2310912778499990807, commit timestamp: Timestamp(1574796722, 1012)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.885-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-281--7234316082034423155, commit timestamp: Timestamp(1574796722, 1012)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.840-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.891-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 1399f768-a37f-4807-9ddb-a2997a156c87: test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 ( ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.886-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: e09d14d5-49c1-46c3-8583-c5eba417ef3f: test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 ( ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.840-0500 I COMMAND [conn81] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8497598742709907207, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1158676286123010378, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796717955), clusterTime: Timestamp(1574796717, 2531) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796717, 2531), signature: { hash: BinData(0, B155B59C95953FCA2BC7F7AB81A2463789C1D0B7), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 4883ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.891-0500 I STORAGE [ReplWriterWorker-4] createCollection: test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d with provided UUID: 6ccf89c7-f71a-49f6-83f6-c2f6487688f6 and options: { uuid: UUID("6ccf89c7-f71a-49f6-83f6-c2f6487688f6"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.888-0500 I STORAGE [ReplWriterWorker-5] createCollection: test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d with provided UUID: 6ccf89c7-f71a-49f6-83f6-c2f6487688f6 and options: { uuid: UUID("6ccf89c7-f71a-49f6-83f6-c2f6487688f6"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.840-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.905-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.904-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.843-0500 I STORAGE [conn82] createCollection: test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d with generated UUID: 6ccf89c7-f71a-49f6-83f6-c2f6487688f6 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.923-0500 I INDEX [ReplWriterWorker-10] index build: starting on test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.933-0500 I INDEX [ReplWriterWorker-3] index build: starting on test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.853-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.923-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.933-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.867-0500 I INDEX [conn88] index build: starting on test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.923-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 25fb4f8a-199a-4019-ba48-f981a8be31e6: test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 (f72342ad-48c4-4fba-9026-a5fc9dc65208 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.933-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: b25a5425-3a69-43e8-80f6-cff16052211d: test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 (f72342ad-48c4-4fba-9026-a5fc9dc65208 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.867-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.923-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.933-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.867-0500 I STORAGE [conn88] Index build initialized: 850c55c6-40bc-43fd-9c09-f18320242d9c: test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 (97059ead-49a2-46c0-98f4-e78a9d171500 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.924-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.934-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.867-0500 I INDEX [conn88] Waiting for index build to complete: 850c55c6-40bc-43fd-9c09-f18320242d9c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.925-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b) to test4_fsmdb0.agg_out and drop 0235f091-e4f7-4096-8802-85a62492f0f9.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.935-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b) to test4_fsmdb0.agg_out and drop 0235f091-e4f7-4096-8802-85a62492f0f9.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.937-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.938-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test4_fsmdb0.agg_out (0235f091-e4f7-4096-8802-85a62492f0f9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 1517), t: 1 } and commit timestamp Timestamp(1574796722, 1517)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.938-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test4_fsmdb0.agg_out (0235f091-e4f7-4096-8802-85a62492f0f9).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.938-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b from test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.938-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (0235f091-e4f7-4096-8802-85a62492f0f9)'. Ident: 'index-286--7234316082034423155', commit timestamp: 'Timestamp(1574796722, 1517)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.938-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (0235f091-e4f7-4096-8802-85a62492f0f9)'. Ident: 'index-297--7234316082034423155', commit timestamp: 'Timestamp(1574796722, 1517)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.938-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-285--7234316082034423155, commit timestamp: Timestamp(1574796722, 1517)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.939-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: b25a5425-3a69-43e8-80f6-cff16052211d: test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 ( f72342ad-48c4-4fba-9026-a5fc9dc65208 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.957-0500 I INDEX [ReplWriterWorker-3] index build: starting on test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.957-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.957-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 31572b58-5979-4b8c-a199-1e7ad0623ac6: test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 (97059ead-49a2-46c0-98f4-e78a9d171500 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.958-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.958-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.960-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46) to test4_fsmdb0.agg_out and drop cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.960-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.960-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test4_fsmdb0.agg_out (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 2021), t: 1 } and commit timestamp Timestamp(1574796722, 2021)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.960-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test4_fsmdb0.agg_out (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.960-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46 from test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.960-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b)'. Ident: 'index-292--7234316082034423155', commit timestamp: 'Timestamp(1574796722, 2021)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.960-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b)'. Ident: 'index-303--7234316082034423155', commit timestamp: 'Timestamp(1574796722, 2021)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.960-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-291--7234316082034423155, commit timestamp: Timestamp(1574796722, 2021)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.962-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 31572b58-5979-4b8c-a199-1e7ad0623ac6: test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 ( 97059ead-49a2-46c0-98f4-e78a9d171500 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.963-0500 I STORAGE [ReplWriterWorker-5] createCollection: test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 with provided UUID: 731fc8c4-de8c-4f5b-9359-53f3260b7d0a and options: { uuid: UUID("731fc8c4-de8c-4f5b-9359-53f3260b7d0a"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.978-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.979-0500 I STORAGE [ReplWriterWorker-14] createCollection: test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a with provided UUID: aa33fc25-f384-484c-a5f4-b20506c409ea and options: { uuid: UUID("aa33fc25-f384-484c-a5f4-b20506c409ea"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:02.993-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.009-0500 I INDEX [ReplWriterWorker-4] index build: starting on test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.009-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.009-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 029bba6c-03e3-44ac-ad39-d54d26047b97: test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d (6ccf89c7-f71a-49f6-83f6-c2f6487688f6 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.009-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.010-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.012-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.014-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 (f72342ad-48c4-4fba-9026-a5fc9dc65208) to test4_fsmdb0.agg_out and drop ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.014-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test4_fsmdb0.agg_out (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 2526), t: 1 } and commit timestamp Timestamp(1574796722, 2526)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.014-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test4_fsmdb0.agg_out (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.014-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection f72342ad-48c4-4fba-9026-a5fc9dc65208 from test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.014-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46)'. Ident: 'index-300--7234316082034423155', commit timestamp: 'Timestamp(1574796722, 2526)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.014-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46)'. Ident: 'index-307--7234316082034423155', commit timestamp: 'Timestamp(1574796722, 2526)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.014-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-299--7234316082034423155, commit timestamp: Timestamp(1574796722, 2526)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.017-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 029bba6c-03e3-44ac-ad39-d54d26047b97: test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d ( 6ccf89c7-f71a-49f6-83f6-c2f6487688f6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.017-0500 I STORAGE [ReplWriterWorker-3] createCollection: test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 with provided UUID: ea92c756-ca1c-454a-8900-8c864b5c4ed5 and options: { uuid: UUID("ea92c756-ca1c-454a-8900-8c864b5c4ed5"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.031-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.046-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 (97059ead-49a2-46c0-98f4-e78a9d171500) to test4_fsmdb0.agg_out and drop f72342ad-48c4-4fba-9026-a5fc9dc65208.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.046-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test4_fsmdb0.agg_out (f72342ad-48c4-4fba-9026-a5fc9dc65208) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 3029), t: 1 } and commit timestamp Timestamp(1574796722, 3029)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.046-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test4_fsmdb0.agg_out (f72342ad-48c4-4fba-9026-a5fc9dc65208).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.046-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 97059ead-49a2-46c0-98f4-e78a9d171500 from test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.046-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (f72342ad-48c4-4fba-9026-a5fc9dc65208)'. Ident: 'index-302--7234316082034423155', commit timestamp: 'Timestamp(1574796722, 3029)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.046-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (f72342ad-48c4-4fba-9026-a5fc9dc65208)'. Ident: 'index-311--7234316082034423155', commit timestamp: 'Timestamp(1574796722, 3029)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.046-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-301--7234316082034423155, commit timestamp: Timestamp(1574796722, 3029)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.049-0500 I STORAGE [ReplWriterWorker-15] createCollection: test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 with provided UUID: b65f9d6c-8205-4b49-8809-d9a6fa995b77 and options: { uuid: UUID("b65f9d6c-8205-4b49-8809-d9a6fa995b77"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.065-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.082-0500 I INDEX [ReplWriterWorker-8] index build: starting on test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.082-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.082-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 22fd30f1-7a9d-43cd-8d8a-de7906e12fcb: test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 (731fc8c4-de8c-4f5b-9359-53f3260b7d0a ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.082-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.083-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.084-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d (6ccf89c7-f71a-49f6-83f6-c2f6487688f6) to test4_fsmdb0.agg_out and drop 97059ead-49a2-46c0-98f4-e78a9d171500.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.086-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.087-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test4_fsmdb0.agg_out (97059ead-49a2-46c0-98f4-e78a9d171500) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796723, 2), t: 1 } and commit timestamp Timestamp(1574796723, 2)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.087-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test4_fsmdb0.agg_out (97059ead-49a2-46c0-98f4-e78a9d171500).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.087-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 6ccf89c7-f71a-49f6-83f6-c2f6487688f6 from test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.087-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (97059ead-49a2-46c0-98f4-e78a9d171500)'. Ident: 'index-306--7234316082034423155', commit timestamp: 'Timestamp(1574796723, 2)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.087-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (97059ead-49a2-46c0-98f4-e78a9d171500)'. Ident: 'index-313--7234316082034423155', commit timestamp: 'Timestamp(1574796723, 2)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.087-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-305--7234316082034423155, commit timestamp: Timestamp(1574796723, 2)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.087-0500 I STORAGE [ReplWriterWorker-2] createCollection: test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b with provided UUID: 9c782e1b-4d04-4dc2-8f81-30b332175404 and options: { uuid: UUID("9c782e1b-4d04-4dc2-8f81-30b332175404"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.088-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 22fd30f1-7a9d-43cd-8d8a-de7906e12fcb: test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 ( 731fc8c4-de8c-4f5b-9359-53f3260b7d0a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.104-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.123-0500 I INDEX [ReplWriterWorker-8] index build: starting on test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.123-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.123-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: c103d55b-30f0-4e5c-bac5-a1590bd20bb9: test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a (aa33fc25-f384-484c-a5f4-b20506c409ea ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.123-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.124-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.126-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.131-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: c103d55b-30f0-4e5c-bac5-a1590bd20bb9: test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a ( aa33fc25-f384-484c-a5f4-b20506c409ea ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.148-0500 I INDEX [ReplWriterWorker-6] index build: starting on test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.148-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.148-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 560d22e0-9b22-4f23-b956-9502e0706b56: test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 (ea92c756-ca1c-454a-8900-8c864b5c4ed5 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.148-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.148-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.150-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 (731fc8c4-de8c-4f5b-9359-53f3260b7d0a) to test4_fsmdb0.agg_out and drop 6ccf89c7-f71a-49f6-83f6-c2f6487688f6.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.151-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.151-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test4_fsmdb0.agg_out (6ccf89c7-f71a-49f6-83f6-c2f6487688f6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796723, 510), t: 1 } and commit timestamp Timestamp(1574796723, 510)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.151-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test4_fsmdb0.agg_out (6ccf89c7-f71a-49f6-83f6-c2f6487688f6).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.151-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 731fc8c4-de8c-4f5b-9359-53f3260b7d0a from test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.151-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (6ccf89c7-f71a-49f6-83f6-c2f6487688f6)'. Ident: 'index-310--7234316082034423155', commit timestamp: 'Timestamp(1574796723, 510)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.151-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (6ccf89c7-f71a-49f6-83f6-c2f6487688f6)'. Ident: 'index-319--7234316082034423155', commit timestamp: 'Timestamp(1574796723, 510)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.151-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-309--7234316082034423155, commit timestamp: Timestamp(1574796723, 510)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.152-0500 I STORAGE [ReplWriterWorker-0] createCollection: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 with provided UUID: c4615f37-3be4-4b6e-b8d0-dd476eedafca and options: { uuid: UUID("c4615f37-3be4-4b6e-b8d0-dd476eedafca"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.154-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 560d22e0-9b22-4f23-b956-9502e0706b56: test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 ( ea92c756-ca1c-454a-8900-8c864b5c4ed5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.167-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.183-0500 I INDEX [ReplWriterWorker-11] index build: starting on test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.183-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.183-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 696aabf2-dbe8-4a9c-aaa9-50db8f6212d4: test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 (b65f9d6c-8205-4b49-8809-d9a6fa995b77 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.183-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.184-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.186-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.217-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 696aabf2-dbe8-4a9c-aaa9-50db8f6212d4: test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 ( b65f9d6c-8205-4b49-8809-d9a6fa995b77 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.234-0500 I INDEX [ReplWriterWorker-8] index build: starting on test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.234-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.234-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 9c78736f-46e8-49ad-ba5c-74da5760e34c: test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b (9c782e1b-4d04-4dc2-8f81-30b332175404 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.234-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.235-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.869-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 7dc60415-bd1b-4b8f-a1aa-37f493f0e019: test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 ( f72342ad-48c4-4fba-9026-a5fc9dc65208 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.926-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.237-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.869-0500 I INDEX [conn77] Index build completed: 7dc60415-bd1b-4b8f-a1aa-37f493f0e019
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.927-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test4_fsmdb0.agg_out (0235f091-e4f7-4096-8802-85a62492f0f9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 1517), t: 1 } and commit timestamp Timestamp(1574796722, 1517)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.238-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a (aa33fc25-f384-484c-a5f4-b20506c409ea) to test4_fsmdb0.agg_out and drop 731fc8c4-de8c-4f5b-9359-53f3260b7d0a.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.876-0500 I INDEX [conn82] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.927-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test4_fsmdb0.agg_out (0235f091-e4f7-4096-8802-85a62492f0f9).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.238-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test4_fsmdb0.agg_out (731fc8c4-de8c-4f5b-9359-53f3260b7d0a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796723, 1850), t: 1 } and commit timestamp Timestamp(1574796723, 1850)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.876-0500 I COMMAND [conn84] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.927-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b from test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.238-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test4_fsmdb0.agg_out (731fc8c4-de8c-4f5b-9359-53f3260b7d0a).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.877-0500 I STORAGE [conn84] dropCollection: test4_fsmdb0.agg_out (0235f091-e4f7-4096-8802-85a62492f0f9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 1517), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.927-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (0235f091-e4f7-4096-8802-85a62492f0f9)'. Ident: 'index-286--2310912778499990807', commit timestamp: 'Timestamp(1574796722, 1517)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.238-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection aa33fc25-f384-484c-a5f4-b20506c409ea from test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.877-0500 I STORAGE [conn84] Finishing collection drop for test4_fsmdb0.agg_out (0235f091-e4f7-4096-8802-85a62492f0f9).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.927-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (0235f091-e4f7-4096-8802-85a62492f0f9)'. Ident: 'index-297--2310912778499990807', commit timestamp: 'Timestamp(1574796722, 1517)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.238-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (731fc8c4-de8c-4f5b-9359-53f3260b7d0a)'. Ident: 'index-316--7234316082034423155', commit timestamp: 'Timestamp(1574796723, 1850)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.877-0500 I STORAGE [conn84] renameCollection: renaming collection cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b from test4_fsmdb0.tmp.agg_out.80041ed4-0482-4b4a-82f4-96b1128dce36 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.927-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-285--2310912778499990807, commit timestamp: Timestamp(1574796722, 1517)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.238-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (731fc8c4-de8c-4f5b-9359-53f3260b7d0a)'. Ident: 'index-325--7234316082034423155', commit timestamp: 'Timestamp(1574796723, 1850)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.877-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (0235f091-e4f7-4096-8802-85a62492f0f9)'. Ident: 'index-279--2588534479858262356', commit timestamp: 'Timestamp(1574796722, 1517)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.932-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 25fb4f8a-199a-4019-ba48-f981a8be31e6: test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 ( f72342ad-48c4-4fba-9026-a5fc9dc65208 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.239-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-315--7234316082034423155, commit timestamp: Timestamp(1574796723, 1850)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.877-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (0235f091-e4f7-4096-8802-85a62492f0f9)'. Ident: 'index-286--2588534479858262356', commit timestamp: 'Timestamp(1574796722, 1517)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.952-0500 I INDEX [ReplWriterWorker-4] index build: starting on test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.241-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 9c78736f-46e8-49ad-ba5c-74da5760e34c: test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b ( 9c782e1b-4d04-4dc2-8f81-30b332175404 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.877-0500 I STORAGE [conn84] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-277--2588534479858262356, commit timestamp: Timestamp(1574796722, 1517)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.952-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.254-0500 I INDEX [ReplWriterWorker-10] index build: starting on test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.877-0500 I INDEX [conn82] Registering index build: a229063e-94c9-49ee-9e4e-879d930caf3f
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.952-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 9dced7d1-0ee4-413c-a99b-04873f12f6d5: test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 (97059ead-49a2-46c0-98f4-e78a9d171500 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.254-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.877-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.952-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.254-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 94a5d54f-561e-4426-9849-39960ce38f5b: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 (c4615f37-3be4-4b6e-b8d0-dd476eedafca ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.877-0500 I COMMAND [conn65] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6973069385474061608, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8522828651619278424, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796718041), clusterTime: Timestamp(1574796718, 506) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796718, 506), signature: { hash: BinData(0, 60E17FFC301900125993E52B535A0EF47806D1B6), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 4834ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.953-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.255-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.877-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.954-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46) to test4_fsmdb0.agg_out and drop cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.255-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.879-0500 I COMMAND [conn65] CMD: dropIndexes test4_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.955-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.256-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 (ea92c756-ca1c-454a-8900-8c864b5c4ed5) to test4_fsmdb0.agg_out and drop aa33fc25-f384-484c-a5f4-b20506c409ea.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.889-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.955-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test4_fsmdb0.agg_out (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 2021), t: 1 } and commit timestamp Timestamp(1574796722, 2021)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.258-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.897-0500 I INDEX [conn82] index build: starting on test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.955-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test4_fsmdb0.agg_out (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.258-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test4_fsmdb0.agg_out (aa33fc25-f384-484c-a5f4-b20506c409ea) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796723, 2021), t: 1 } and commit timestamp Timestamp(1574796723, 2021)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.897-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.955-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46 from test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.258-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test4_fsmdb0.agg_out (aa33fc25-f384-484c-a5f4-b20506c409ea).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.897-0500 I STORAGE [conn82] Index build initialized: a229063e-94c9-49ee-9e4e-879d930caf3f: test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d (6ccf89c7-f71a-49f6-83f6-c2f6487688f6 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.956-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b)'. Ident: 'index-292--2310912778499990807', commit timestamp: 'Timestamp(1574796722, 2021)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.258-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection ea92c756-ca1c-454a-8900-8c864b5c4ed5 from test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.897-0500 I INDEX [conn82] Waiting for index build to complete: a229063e-94c9-49ee-9e4e-879d930caf3f
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.956-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b)'. Ident: 'index-303--2310912778499990807', commit timestamp: 'Timestamp(1574796722, 2021)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.258-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (aa33fc25-f384-484c-a5f4-b20506c409ea)'. Ident: 'index-318--7234316082034423155', commit timestamp: 'Timestamp(1574796723, 2021)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.897-0500 I COMMAND [conn85] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.956-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-291--2310912778499990807, commit timestamp: Timestamp(1574796722, 2021)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.258-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (aa33fc25-f384-484c-a5f4-b20506c409ea)'. Ident: 'index-329--7234316082034423155', commit timestamp: 'Timestamp(1574796723, 2021)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.898-0500 I STORAGE [conn85] dropCollection: test4_fsmdb0.agg_out (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 2021), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.957-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 9dced7d1-0ee4-413c-a99b-04873f12f6d5: test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 ( 97059ead-49a2-46c0-98f4-e78a9d171500 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.258-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-317--7234316082034423155, commit timestamp: Timestamp(1574796723, 2021)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.898-0500 I STORAGE [conn85] Finishing collection drop for test4_fsmdb0.agg_out (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.959-0500 I STORAGE [ReplWriterWorker-14] createCollection: test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 with provided UUID: 731fc8c4-de8c-4f5b-9359-53f3260b7d0a and options: { uuid: UUID("731fc8c4-de8c-4f5b-9359-53f3260b7d0a"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:03.260-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 94a5d54f-561e-4426-9849-39960ce38f5b: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 ( c4615f37-3be4-4b6e-b8d0-dd476eedafca ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.898-0500 I STORAGE [conn85] renameCollection: renaming collection ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46 from test4_fsmdb0.tmp.agg_out.827df27f-189e-44c7-a8a2-59682511ade9 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.973-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.898-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b)'. Ident: 'index-289--2588534479858262356', commit timestamp: 'Timestamp(1574796722, 2021)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:05.998-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 (b65f9d6c-8205-4b49-8809-d9a6fa995b77) to test4_fsmdb0.agg_out and drop ea92c756-ca1c-454a-8900-8c864b5c4ed5.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.974-0500 I STORAGE [ReplWriterWorker-0] createCollection: test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a with provided UUID: aa33fc25-f384-484c-a5f4-b20506c409ea and options: { uuid: UUID("aa33fc25-f384-484c-a5f4-b20506c409ea"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.898-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (cc062ac2-80af-4c86-8fe5-6e3f23ae7c9b)'. Ident: 'index-290--2588534479858262356', commit timestamp: 'Timestamp(1574796722, 2021)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:05.998-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test4_fsmdb0.agg_out (ea92c756-ca1c-454a-8900-8c864b5c4ed5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796723, 2022), t: 1 } and commit timestamp Timestamp(1574796723, 2022)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:02.987-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.898-0500 I STORAGE [conn85] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-287--2588534479858262356, commit timestamp: Timestamp(1574796722, 2021)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:05.998-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test4_fsmdb0.agg_out (ea92c756-ca1c-454a-8900-8c864b5c4ed5).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.004-0500 I INDEX [ReplWriterWorker-11] index build: starting on test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.898-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:05.998-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection b65f9d6c-8205-4b49-8809-d9a6fa995b77 from test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.004-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.898-0500 I COMMAND [conn64] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6514833422801480279, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6305188033398323370, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796718083), clusterTime: Timestamp(1574796718, 1079) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796718, 1079), signature: { hash: BinData(0, 60E17FFC301900125993E52B535A0EF47806D1B6), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 4813ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:05.998-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (ea92c756-ca1c-454a-8900-8c864b5c4ed5)'. Ident: 'index-322--7234316082034423155', commit timestamp: 'Timestamp(1574796723, 2022)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.004-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 86b708a7-e647-48bf-88ba-fbc20456193c: test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d (6ccf89c7-f71a-49f6-83f6-c2f6487688f6 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.900-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 850c55c6-40bc-43fd-9c09-f18320242d9c: test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 ( 97059ead-49a2-46c0-98f4-e78a9d171500 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:05.998-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (ea92c756-ca1c-454a-8900-8c864b5c4ed5)'. Ident: 'index-331--7234316082034423155', commit timestamp: 'Timestamp(1574796723, 2022)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.004-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.900-0500 I INDEX [conn88] Index build completed: 850c55c6-40bc-43fd-9c09-f18320242d9c
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:05.998-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-321--7234316082034423155, commit timestamp: Timestamp(1574796723, 2022)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.004-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.900-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.008-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.900-0500 I STORAGE [conn85] createCollection: test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 with generated UUID: 731fc8c4-de8c-4f5b-9359-53f3260b7d0a and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.009-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 (f72342ad-48c4-4fba-9026-a5fc9dc65208) to test4_fsmdb0.agg_out and drop ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.901-0500 I STORAGE [conn84] createCollection: test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a with generated UUID: aa33fc25-f384-484c-a5f4-b20506c409ea and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.009-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test4_fsmdb0.agg_out (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 2526), t: 1 } and commit timestamp Timestamp(1574796722, 2526)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.902-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.009-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test4_fsmdb0.agg_out (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.920-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: a229063e-94c9-49ee-9e4e-879d930caf3f: test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d ( 6ccf89c7-f71a-49f6-83f6-c2f6487688f6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.009-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection f72342ad-48c4-4fba-9026-a5fc9dc65208 from test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.920-0500 I INDEX [conn82] Index build completed: a229063e-94c9-49ee-9e4e-879d930caf3f
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.009-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46)'. Ident: 'index-300--2310912778499990807', commit timestamp: 'Timestamp(1574796722, 2526)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.928-0500 I INDEX [conn85] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.009-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46)'. Ident: 'index-307--2310912778499990807', commit timestamp: 'Timestamp(1574796722, 2526)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.936-0500 I INDEX [conn84] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.009-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-299--2310912778499990807, commit timestamp: Timestamp(1574796722, 2526)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.936-0500 I COMMAND [conn77] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.010-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 86b708a7-e647-48bf-88ba-fbc20456193c: test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d ( 6ccf89c7-f71a-49f6-83f6-c2f6487688f6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.936-0500 I STORAGE [conn77] dropCollection: test4_fsmdb0.agg_out (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 2526), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.013-0500 I STORAGE [ReplWriterWorker-2] createCollection: test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 with provided UUID: ea92c756-ca1c-454a-8900-8c864b5c4ed5 and options: { uuid: UUID("ea92c756-ca1c-454a-8900-8c864b5c4ed5"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.936-0500 I STORAGE [conn77] Finishing collection drop for test4_fsmdb0.agg_out (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.028-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.936-0500 I STORAGE [conn77] renameCollection: renaming collection f72342ad-48c4-4fba-9026-a5fc9dc65208 from test4_fsmdb0.tmp.agg_out.3d51c2f8-0bf9-4cd6-a587-3a9f1f016a45 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.032-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 (97059ead-49a2-46c0-98f4-e78a9d171500) to test4_fsmdb0.agg_out and drop f72342ad-48c4-4fba-9026-a5fc9dc65208.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.936-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46)'. Ident: 'index-293--2588534479858262356', commit timestamp: 'Timestamp(1574796722, 2526)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.032-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test4_fsmdb0.agg_out (f72342ad-48c4-4fba-9026-a5fc9dc65208) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 3029), t: 1 } and commit timestamp Timestamp(1574796722, 3029)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.936-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (ab00cc5d-7595-4ce3-b79b-d82c7cf4fa46)'. Ident: 'index-294--2588534479858262356', commit timestamp: 'Timestamp(1574796722, 2526)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.032-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test4_fsmdb0.agg_out (f72342ad-48c4-4fba-9026-a5fc9dc65208).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.936-0500 I STORAGE [conn77] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-292--2588534479858262356, commit timestamp: Timestamp(1574796722, 2526)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.033-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 97059ead-49a2-46c0-98f4-e78a9d171500 from test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.936-0500 I INDEX [conn85] Registering index build: 4d1ab433-5e5e-4909-b8ac-23fa135e8042
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.033-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (f72342ad-48c4-4fba-9026-a5fc9dc65208)'. Ident: 'index-302--2310912778499990807', commit timestamp: 'Timestamp(1574796722, 3029)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.936-0500 I INDEX [conn84] Registering index build: c8d59491-b1b2-4e62-a4b0-6c676bb3d09a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.033-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (f72342ad-48c4-4fba-9026-a5fc9dc65208)'. Ident: 'index-311--2310912778499990807', commit timestamp: 'Timestamp(1574796722, 3029)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.937-0500 I COMMAND [conn62] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7669348664673332202, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 911467933518941417, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796722774), clusterTime: Timestamp(1574796722, 66) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796722, 130), signature: { hash: BinData(0, A80560DAA830D2432029D1115DC53CF4A40CC44C), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 161ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.033-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-301--2310912778499990807, commit timestamp: Timestamp(1574796722, 3029)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.940-0500 I STORAGE [conn77] createCollection: test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 with generated UUID: ea92c756-ca1c-454a-8900-8c864b5c4ed5 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.036-0500 I STORAGE [ReplWriterWorker-12] createCollection: test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 with provided UUID: b65f9d6c-8205-4b49-8809-d9a6fa995b77 and options: { uuid: UUID("b65f9d6c-8205-4b49-8809-d9a6fa995b77"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.967-0500 I INDEX [conn85] index build: starting on test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.052-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.967-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.071-0500 I INDEX [ReplWriterWorker-9] index build: starting on test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.967-0500 I STORAGE [conn85] Index build initialized: 4d1ab433-5e5e-4909-b8ac-23fa135e8042: test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 (731fc8c4-de8c-4f5b-9359-53f3260b7d0a ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.071-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.967-0500 I INDEX [conn85] Waiting for index build to complete: 4d1ab433-5e5e-4909-b8ac-23fa135e8042
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.071-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: f89233c2-b29b-4f63-89ad-c7e0fe622647: test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 (731fc8c4-de8c-4f5b-9359-53f3260b7d0a ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.974-0500 I INDEX [conn77] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.072-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.975-0500 I COMMAND [conn88] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.072-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.975-0500 I STORAGE [conn88] dropCollection: test4_fsmdb0.agg_out (f72342ad-48c4-4fba-9026-a5fc9dc65208) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796722, 3029), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.073-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d (6ccf89c7-f71a-49f6-83f6-c2f6487688f6) to test4_fsmdb0.agg_out and drop 97059ead-49a2-46c0-98f4-e78a9d171500.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.975-0500 I STORAGE [conn88] Finishing collection drop for test4_fsmdb0.agg_out (f72342ad-48c4-4fba-9026-a5fc9dc65208).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.075-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.975-0500 I STORAGE [conn88] renameCollection: renaming collection 97059ead-49a2-46c0-98f4-e78a9d171500 from test4_fsmdb0.tmp.agg_out.8f886a13-2eaf-4f48-b54c-445570bc0951 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.075-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test4_fsmdb0.agg_out (97059ead-49a2-46c0-98f4-e78a9d171500) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796723, 2), t: 1 } and commit timestamp Timestamp(1574796723, 2)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.975-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (f72342ad-48c4-4fba-9026-a5fc9dc65208)'. Ident: 'index-297--2588534479858262356', commit timestamp: 'Timestamp(1574796722, 3029)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.075-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test4_fsmdb0.agg_out (97059ead-49a2-46c0-98f4-e78a9d171500).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.975-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (f72342ad-48c4-4fba-9026-a5fc9dc65208)'. Ident: 'index-298--2588534479858262356', commit timestamp: 'Timestamp(1574796722, 3029)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.075-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 6ccf89c7-f71a-49f6-83f6-c2f6487688f6 from test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.975-0500 I STORAGE [conn88] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-295--2588534479858262356, commit timestamp: Timestamp(1574796722, 3029)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.075-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (97059ead-49a2-46c0-98f4-e78a9d171500)'. Ident: 'index-306--2310912778499990807', commit timestamp: 'Timestamp(1574796723, 2)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.975-0500 I INDEX [conn77] Registering index build: b2ea52af-fac8-4219-a87b-0f009c517cfe
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.075-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (97059ead-49a2-46c0-98f4-e78a9d171500)'. Ident: 'index-313--2310912778499990807', commit timestamp: 'Timestamp(1574796723, 2)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.975-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.075-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-305--2310912778499990807, commit timestamp: Timestamp(1574796723, 2)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.975-0500 I COMMAND [conn80] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1167064594981159251, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4902552143383476306, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796722807), clusterTime: Timestamp(1574796722, 571) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796722, 635), signature: { hash: BinData(0, A80560DAA830D2432029D1115DC53CF4A40CC44C), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 167ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.076-0500 I STORAGE [ReplWriterWorker-7] createCollection: test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b with provided UUID: 9c782e1b-4d04-4dc2-8f81-30b332175404 and options: { uuid: UUID("9c782e1b-4d04-4dc2-8f81-30b332175404"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.976-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.078-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: f89233c2-b29b-4f63-89ad-c7e0fe622647: test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 ( 731fc8c4-de8c-4f5b-9359-53f3260b7d0a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.978-0500 I STORAGE [conn88] createCollection: test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 with generated UUID: b65f9d6c-8205-4b49-8809-d9a6fa995b77 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.093-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:02.987-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.111-0500 I INDEX [ReplWriterWorker-7] index build: starting on test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.005-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 4d1ab433-5e5e-4909-b8ac-23fa135e8042: test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 ( 731fc8c4-de8c-4f5b-9359-53f3260b7d0a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.111-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.005-0500 I INDEX [conn84] index build: starting on test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.111-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 1d349f9c-9176-4b7a-8f45-9fccae586014: test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a (aa33fc25-f384-484c-a5f4-b20506c409ea ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.005-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.112-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.006-0500 I STORAGE [conn84] Index build initialized: c8d59491-b1b2-4e62-a4b0-6c676bb3d09a: test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a (aa33fc25-f384-484c-a5f4-b20506c409ea ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.112-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.006-0500 I INDEX [conn84] Waiting for index build to complete: c8d59491-b1b2-4e62-a4b0-6c676bb3d09a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.115-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.006-0500 I INDEX [conn85] Index build completed: 4d1ab433-5e5e-4909-b8ac-23fa135e8042
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.119-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 1d349f9c-9176-4b7a-8f45-9fccae586014: test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a ( aa33fc25-f384-484c-a5f4-b20506c409ea ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.013-0500 I INDEX [conn88] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.134-0500 I INDEX [ReplWriterWorker-8] index build: starting on test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.013-0500 I COMMAND [conn82] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.134-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.013-0500 I STORAGE [conn82] dropCollection: test4_fsmdb0.agg_out (97059ead-49a2-46c0-98f4-e78a9d171500) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796723, 2), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.134-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: ad65e4ff-41a1-482e-af21-eb0e463219e8: test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 (ea92c756-ca1c-454a-8900-8c864b5c4ed5 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.013-0500 I STORAGE [conn82] Finishing collection drop for test4_fsmdb0.agg_out (97059ead-49a2-46c0-98f4-e78a9d171500).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.134-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.013-0500 I STORAGE [conn82] renameCollection: renaming collection 6ccf89c7-f71a-49f6-83f6-c2f6487688f6 from test4_fsmdb0.tmp.agg_out.2ed78865-0157-4b7d-ae2b-e7b312a8b99d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.135-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.013-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (97059ead-49a2-46c0-98f4-e78a9d171500)'. Ident: 'index-301--2588534479858262356', commit timestamp: 'Timestamp(1574796723, 2)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.136-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 (731fc8c4-de8c-4f5b-9359-53f3260b7d0a) to test4_fsmdb0.agg_out and drop 6ccf89c7-f71a-49f6-83f6-c2f6487688f6.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.013-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (97059ead-49a2-46c0-98f4-e78a9d171500)'. Ident: 'index-302--2588534479858262356', commit timestamp: 'Timestamp(1574796723, 2)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.138-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.013-0500 I STORAGE [conn82] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-299--2588534479858262356, commit timestamp: Timestamp(1574796723, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.138-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test4_fsmdb0.agg_out (6ccf89c7-f71a-49f6-83f6-c2f6487688f6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796723, 510), t: 1 } and commit timestamp Timestamp(1574796723, 510)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.013-0500 I INDEX [conn88] Registering index build: ce566484-30fc-4f92-9df0-a543d5e65eb7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.138-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test4_fsmdb0.agg_out (6ccf89c7-f71a-49f6-83f6-c2f6487688f6).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.013-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.138-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection 731fc8c4-de8c-4f5b-9359-53f3260b7d0a from test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.013-0500 I COMMAND [conn81] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3979733740864589578, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3558365536550783535, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796722842), clusterTime: Timestamp(1574796722, 1076) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796722, 1140), signature: { hash: BinData(0, A80560DAA830D2432029D1115DC53CF4A40CC44C), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 171ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.138-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (6ccf89c7-f71a-49f6-83f6-c2f6487688f6)'. Ident: 'index-310--2310912778499990807', commit timestamp: 'Timestamp(1574796723, 510)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.014-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.138-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (6ccf89c7-f71a-49f6-83f6-c2f6487688f6)'. Ident: 'index-319--2310912778499990807', commit timestamp: 'Timestamp(1574796723, 510)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.017-0500 I STORAGE [conn82] createCollection: test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b with generated UUID: 9c782e1b-4d04-4dc2-8f81-30b332175404 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.138-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-309--2310912778499990807, commit timestamp: Timestamp(1574796723, 510)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.024-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.139-0500 I STORAGE [ReplWriterWorker-1] createCollection: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 with provided UUID: c4615f37-3be4-4b6e-b8d0-dd476eedafca and options: { uuid: UUID("c4615f37-3be4-4b6e-b8d0-dd476eedafca"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.040-0500 I INDEX [conn77] index build: starting on test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.143-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: ad65e4ff-41a1-482e-af21-eb0e463219e8: test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 ( ea92c756-ca1c-454a-8900-8c864b5c4ed5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.040-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.157-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.040-0500 I STORAGE [conn77] Index build initialized: b2ea52af-fac8-4219-a87b-0f009c517cfe: test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 (ea92c756-ca1c-454a-8900-8c864b5c4ed5 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.173-0500 I INDEX [ReplWriterWorker-4] index build: starting on test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.040-0500 I INDEX [conn77] Waiting for index build to complete: b2ea52af-fac8-4219-a87b-0f009c517cfe
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.022-0500 I STORAGE [ReplWriterWorker-3] createCollection: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 with provided UUID: 8b458b39-930e-4388-bffb-f6bf754070a6 and options: { uuid: UUID("8b458b39-930e-4388-bffb-f6bf754070a6"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.173-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.041-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.173-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: bcbb9250-c34a-4299-9734-33021c02b615: test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 (b65f9d6c-8205-4b49-8809-d9a6fa995b77 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.043-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: c8d59491-b1b2-4e62-a4b0-6c676bb3d09a: test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a ( aa33fc25-f384-484c-a5f4-b20506c409ea ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.173-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.051-0500 I INDEX [conn82] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.173-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.051-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.176-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.052-0500 I INDEX [conn82] Registering index build: 5353debf-59b8-4826-a71b-84a73cc02754
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.180-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: bcbb9250-c34a-4299-9734-33021c02b615: test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 ( b65f9d6c-8205-4b49-8809-d9a6fa995b77 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.062-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.230-0500 I INDEX [ReplWriterWorker-0] index build: starting on test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.071-0500 I INDEX [conn88] index build: starting on test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.230-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.071-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.230-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 847001f8-b744-4616-8afc-300503dc4494: test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b (9c782e1b-4d04-4dc2-8f81-30b332175404 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.071-0500 I STORAGE [conn88] Index build initialized: ce566484-30fc-4f92-9df0-a543d5e65eb7: test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 (b65f9d6c-8205-4b49-8809-d9a6fa995b77 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.230-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.071-0500 I INDEX [conn88] Waiting for index build to complete: ce566484-30fc-4f92-9df0-a543d5e65eb7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.230-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.071-0500 I INDEX [conn84] Index build completed: c8d59491-b1b2-4e62-a4b0-6c676bb3d09a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.232-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.071-0500 I COMMAND [conn85] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.234-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a (aa33fc25-f384-484c-a5f4-b20506c409ea) to test4_fsmdb0.agg_out and drop 731fc8c4-de8c-4f5b-9359-53f3260b7d0a.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.071-0500 I COMMAND [conn84] command test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796722, 2526), signature: { hash: BinData(0, A80560DAA830D2432029D1115DC53CF4A40CC44C), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 7577 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 134ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.234-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test4_fsmdb0.agg_out (731fc8c4-de8c-4f5b-9359-53f3260b7d0a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796723, 1850), t: 1 } and commit timestamp Timestamp(1574796723, 1850)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.071-0500 I STORAGE [conn85] dropCollection: test4_fsmdb0.agg_out (6ccf89c7-f71a-49f6-83f6-c2f6487688f6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796723, 510), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.234-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test4_fsmdb0.agg_out (731fc8c4-de8c-4f5b-9359-53f3260b7d0a).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.071-0500 I STORAGE [conn85] Finishing collection drop for test4_fsmdb0.agg_out (6ccf89c7-f71a-49f6-83f6-c2f6487688f6).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.234-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection aa33fc25-f384-484c-a5f4-b20506c409ea from test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.071-0500 I STORAGE [conn85] renameCollection: renaming collection 731fc8c4-de8c-4f5b-9359-53f3260b7d0a from test4_fsmdb0.tmp.agg_out.702242b0-2bb1-47ae-a829-2a54de248723 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.234-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (731fc8c4-de8c-4f5b-9359-53f3260b7d0a)'. Ident: 'index-316--2310912778499990807', commit timestamp: 'Timestamp(1574796723, 1850)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.072-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (6ccf89c7-f71a-49f6-83f6-c2f6487688f6)'. Ident: 'index-305--2588534479858262356', commit timestamp: 'Timestamp(1574796723, 510)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.234-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (731fc8c4-de8c-4f5b-9359-53f3260b7d0a)'. Ident: 'index-325--2310912778499990807', commit timestamp: 'Timestamp(1574796723, 1850)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.072-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (6ccf89c7-f71a-49f6-83f6-c2f6487688f6)'. Ident: 'index-306--2588534479858262356', commit timestamp: 'Timestamp(1574796723, 510)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.234-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-315--2310912778499990807, commit timestamp: Timestamp(1574796723, 1850)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.072-0500 I STORAGE [conn85] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-303--2588534479858262356, commit timestamp: Timestamp(1574796723, 510)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.236-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 847001f8-b744-4616-8afc-300503dc4494: test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b ( 9c782e1b-4d04-4dc2-8f81-30b332175404 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.072-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: b2ea52af-fac8-4219-a87b-0f009c517cfe: test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 ( ea92c756-ca1c-454a-8900-8c864b5c4ed5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.251-0500 I INDEX [ReplWriterWorker-14] index build: starting on test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.072-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.251-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.072-0500 I COMMAND [conn64] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 786533706229653290, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4223526031131986814, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796722899), clusterTime: Timestamp(1574796722, 2085) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796722, 2149), signature: { hash: BinData(0, A80560DAA830D2432029D1115DC53CF4A40CC44C), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 172ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.251-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: d84ed0d6-fde2-4eb0-bb7e-74048bc0b555: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 (c4615f37-3be4-4b6e-b8d0-dd476eedafca ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.072-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.251-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.075-0500 I STORAGE [conn84] createCollection: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 with generated UUID: c4615f37-3be4-4b6e-b8d0-dd476eedafca and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.252-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.082-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.253-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 (ea92c756-ca1c-454a-8900-8c864b5c4ed5) to test4_fsmdb0.agg_out and drop aa33fc25-f384-484c-a5f4-b20506c409ea.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.100-0500 I INDEX [conn82] index build: starting on test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.254-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.100-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.254-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test4_fsmdb0.agg_out (aa33fc25-f384-484c-a5f4-b20506c409ea) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796723, 2021), t: 1 } and commit timestamp Timestamp(1574796723, 2021)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.100-0500 I STORAGE [conn82] Index build initialized: 5353debf-59b8-4826-a71b-84a73cc02754: test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b (9c782e1b-4d04-4dc2-8f81-30b332175404 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.254-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test4_fsmdb0.agg_out (aa33fc25-f384-484c-a5f4-b20506c409ea).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.100-0500 I INDEX [conn82] Waiting for index build to complete: 5353debf-59b8-4826-a71b-84a73cc02754
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.254-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection ea92c756-ca1c-454a-8900-8c864b5c4ed5 from test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.100-0500 I INDEX [conn77] Index build completed: b2ea52af-fac8-4219-a87b-0f009c517cfe
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.254-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (aa33fc25-f384-484c-a5f4-b20506c409ea)'. Ident: 'index-318--2310912778499990807', commit timestamp: 'Timestamp(1574796723, 2021)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.100-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.254-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (aa33fc25-f384-484c-a5f4-b20506c409ea)'. Ident: 'index-329--2310912778499990807', commit timestamp: 'Timestamp(1574796723, 2021)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.100-0500 I COMMAND [conn77] command test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796722, 3029), signature: { hash: BinData(0, A80560DAA830D2432029D1115DC53CF4A40CC44C), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 7672 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 125ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.254-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-317--2310912778499990807, commit timestamp: Timestamp(1574796723, 2021)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.102-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: ce566484-30fc-4f92-9df0-a543d5e65eb7: test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 ( b65f9d6c-8205-4b49-8809-d9a6fa995b77 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:03.256-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: d84ed0d6-fde2-4eb0-bb7e-74048bc0b555: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 ( c4615f37-3be4-4b6e-b8d0-dd476eedafca ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.102-0500 I INDEX [conn88] Index build completed: ce566484-30fc-4f92-9df0-a543d5e65eb7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:05.998-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 (b65f9d6c-8205-4b49-8809-d9a6fa995b77) to test4_fsmdb0.agg_out and drop ea92c756-ca1c-454a-8900-8c864b5c4ed5.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.111-0500 I INDEX [conn84] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:05.998-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test4_fsmdb0.agg_out (ea92c756-ca1c-454a-8900-8c864b5c4ed5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796723, 2022), t: 1 } and commit timestamp Timestamp(1574796723, 2022)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.111-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:05.998-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test4_fsmdb0.agg_out (ea92c756-ca1c-454a-8900-8c864b5c4ed5).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.111-0500 I INDEX [conn84] Registering index build: 03183208-ccc1-45e3-a178-56124c0b29ac
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:05.998-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection b65f9d6c-8205-4b49-8809-d9a6fa995b77 from test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.116-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:05.998-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (ea92c756-ca1c-454a-8900-8c864b5c4ed5)'. Ident: 'index-322--2310912778499990807', commit timestamp: 'Timestamp(1574796723, 2022)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.129-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 5353debf-59b8-4826-a71b-84a73cc02754: test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b ( 9c782e1b-4d04-4dc2-8f81-30b332175404 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:05.998-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (ea92c756-ca1c-454a-8900-8c864b5c4ed5)'. Ident: 'index-331--2310912778499990807', commit timestamp: 'Timestamp(1574796723, 2022)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.138-0500 I INDEX [conn84] index build: starting on test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:05.998-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-321--2310912778499990807, commit timestamp: Timestamp(1574796723, 2022)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.138-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.022-0500 I STORAGE [ReplWriterWorker-0] createCollection: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 with provided UUID: 8b458b39-930e-4388-bffb-f6bf754070a6 and options: { uuid: UUID("8b458b39-930e-4388-bffb-f6bf754070a6"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.138-0500 I STORAGE [conn84] Index build initialized: 03183208-ccc1-45e3-a178-56124c0b29ac: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 (c4615f37-3be4-4b6e-b8d0-dd476eedafca ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.138-0500 I INDEX [conn84] Waiting for index build to complete: 03183208-ccc1-45e3-a178-56124c0b29ac
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.138-0500 I INDEX [conn82] Index build completed: 5353debf-59b8-4826-a71b-84a73cc02754
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.138-0500 I COMMAND [conn85] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.138-0500 I STORAGE [conn85] dropCollection: test4_fsmdb0.agg_out (731fc8c4-de8c-4f5b-9359-53f3260b7d0a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796723, 1850), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.138-0500 I STORAGE [conn85] Finishing collection drop for test4_fsmdb0.agg_out (731fc8c4-de8c-4f5b-9359-53f3260b7d0a).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.139-0500 I STORAGE [conn85] renameCollection: renaming collection aa33fc25-f384-484c-a5f4-b20506c409ea from test4_fsmdb0.tmp.agg_out.41894e47-02f5-4ddf-8039-c9c61f88cc9a to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.139-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (731fc8c4-de8c-4f5b-9359-53f3260b7d0a)'. Ident: 'index-310--2588534479858262356', commit timestamp: 'Timestamp(1574796723, 1850)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.139-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (731fc8c4-de8c-4f5b-9359-53f3260b7d0a)'. Ident: 'index-312--2588534479858262356', commit timestamp: 'Timestamp(1574796723, 1850)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.139-0500 I STORAGE [conn85] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-308--2588534479858262356, commit timestamp: Timestamp(1574796723, 1850)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.139-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.139-0500 I COMMAND [conn65] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 991446406879673201, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7627820893746182099, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796722900), clusterTime: Timestamp(1574796722, 2149) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796722, 2149), signature: { hash: BinData(0, A80560DAA830D2432029D1115DC53CF4A40CC44C), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 238ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.139-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.141-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.142-0500 I COMMAND [conn77] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.142-0500 I STORAGE [conn77] dropCollection: test4_fsmdb0.agg_out (aa33fc25-f384-484c-a5f4-b20506c409ea) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796723, 2021), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.142-0500 I STORAGE [conn77] Finishing collection drop for test4_fsmdb0.agg_out (aa33fc25-f384-484c-a5f4-b20506c409ea).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.142-0500 I STORAGE [conn77] renameCollection: renaming collection ea92c756-ca1c-454a-8900-8c864b5c4ed5 from test4_fsmdb0.tmp.agg_out.299a29c1-cad6-4c60-bc94-0c9d906e10a2 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.142-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (aa33fc25-f384-484c-a5f4-b20506c409ea)'. Ident: 'index-311--2588534479858262356', commit timestamp: 'Timestamp(1574796723, 2021)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.142-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (aa33fc25-f384-484c-a5f4-b20506c409ea)'. Ident: 'index-316--2588534479858262356', commit timestamp: 'Timestamp(1574796723, 2021)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.142-0500 I STORAGE [conn77] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-309--2588534479858262356, commit timestamp: Timestamp(1574796723, 2021)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.142-0500 I COMMAND [conn88] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.142-0500 I COMMAND [conn62] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6302104822739570951, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4037276902203149182, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796722938), clusterTime: Timestamp(1574796722, 2590) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796722, 2654), signature: { hash: BinData(0, A80560DAA830D2432029D1115DC53CF4A40CC44C), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 203ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.142-0500 I STORAGE [conn88] dropCollection: test4_fsmdb0.agg_out (ea92c756-ca1c-454a-8900-8c864b5c4ed5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796723, 2022), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.142-0500 I STORAGE [conn88] Finishing collection drop for test4_fsmdb0.agg_out (ea92c756-ca1c-454a-8900-8c864b5c4ed5).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.142-0500 I STORAGE [conn88] renameCollection: renaming collection b65f9d6c-8205-4b49-8809-d9a6fa995b77 from test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:03.144-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 03183208-ccc1-45e3-a178-56124c0b29ac: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 ( c4615f37-3be4-4b6e-b8d0-dd476eedafca ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:05.995-0500 I INDEX [conn84] Index build completed: 03183208-ccc1-45e3-a178-56124c0b29ac
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:05.995-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (ea92c756-ca1c-454a-8900-8c864b5c4ed5)'. Ident: 'index-315--2588534479858262356', commit timestamp: 'Timestamp(1574796723, 2022)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:05.995-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (ea92c756-ca1c-454a-8900-8c864b5c4ed5)'. Ident: 'index-320--2588534479858262356', commit timestamp: 'Timestamp(1574796723, 2022)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:05.995-0500 I STORAGE [conn88] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-313--2588534479858262356, commit timestamp: Timestamp(1574796723, 2022)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:05.996-0500 I COMMAND [conn84] command test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796723, 1026), signature: { hash: BinData(0, A52D224811EE02C2D2B5572BB138ABAA426016D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2884ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:05.996-0500 I COMMAND [conn88] command test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92 appName: "tid:3" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test4_fsmdb0.tmp.agg_out.0828dede-9994-4e69-b77f-ce588add5e92", to: "test4_fsmdb0.agg_out", collectionOptions: { validationLevel: "strict", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796723, 2020), signature: { hash: BinData(0, A52D224811EE02C2D2B5572BB138ABAA426016D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 413 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2853ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:05.996-0500 I COMMAND [conn65] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } } ], fromMongos: true, needsMerge: true, collation: { locale: "simple" }, cursor: { batchSize: 0 }, runtimeConstants: { localNow: new Date(1574796723144), clusterTime: Timestamp(1574796723, 2021) }, use44SortKeys: true, allowImplicitCollectionCreation: false, shardVersion: [ Timestamp(1, 3), ObjectId('5ddd7daccf8184c2e1494359') ], lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796723, 2021), signature: { hash: BinData(0, A52D224811EE02C2D2B5572BB138ABAA426016D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } planSummary: COLLSCAN cursorid:8126516292210711152 keysExamined:0 docsExamined:0 numYields:0 nreturned:0 queryHash:CC4733C9 planCacheKey:CC4733C9 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2851610 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 3 } } } protocol:op_msg 2851ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:05.996-0500 I COMMAND [conn80] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8373911012248473359, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6875872994056324976, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796722977), clusterTime: Timestamp(1574796722, 3093) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796722, 3157), signature: { hash: BinData(0, A80560DAA830D2432029D1115DC53CF4A40CC44C), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3018ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:05.996-0500 I STORAGE [conn84] createCollection: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 with generated UUID: 8b458b39-930e-4388-bffb-f6bf754070a6 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:05.997-0500 I STORAGE [conn77] createCollection: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 with generated UUID: 38074c2a-6a92-45f7-838b-8965c3a7b309 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:05.998-0500 I COMMAND [conn197] command test4_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796723, 577), lsid: { id: UUID("3f42616f-e948-49d9-97ee-c2eb72d5ff98") }, $clusterTime: { clusterTime: Timestamp(1574796723, 577), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796723, 577). Collection minimum timestamp is Timestamp(1574796723, 1464)" errName:SnapshotUnavailable errCode:246 reslen:581 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2810554 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 2812ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:05.999-0500 I STORAGE [conn88] createCollection: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b with generated UUID: b2ffcee2-27e0-4c4c-baea-dd0f8042aa30 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.007-0500 I COMMAND [conn82] command test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b appName: "tid:1" command: insert { insert: "tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b", bypassDocumentValidation: false, ordered: false, documents: 500, shardVersion: [ Timestamp(0, 0), ObjectId('000000000000000000000000') ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, writeConcern: { w: 1, wtimeout: 0 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796723, 2020), signature: { hash: BinData(0, A52D224811EE02C2D2B5572BB138ABAA426016D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } ninserted:500 keysInserted:1000 numYields:0 reslen:400 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 8 } }, ReplicationStateTransition: { acquireCount: { w: 8 } }, Global: { acquireCount: { w: 8 } }, Database: { acquireCount: { w: 8 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 2853117 } }, Collection: { acquireCount: { w: 8 } }, Mutex: { acquireCount: { r: 1016 } } } flowControl:{ acquireCount: 8 } storage:{} protocol:op_msg 2864ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.020-0500 I INDEX [conn84] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.025-0500 I INDEX [conn77] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.032-0500 I INDEX [conn88] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.032-0500 I COMMAND [conn82] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.032-0500 I STORAGE [conn82] dropCollection: test4_fsmdb0.agg_out (b65f9d6c-8205-4b49-8809-d9a6fa995b77) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 373), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.032-0500 I STORAGE [conn82] Finishing collection drop for test4_fsmdb0.agg_out (b65f9d6c-8205-4b49-8809-d9a6fa995b77).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.032-0500 I STORAGE [conn82] renameCollection: renaming collection 9c782e1b-4d04-4dc2-8f81-30b332175404 from test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.032-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (b65f9d6c-8205-4b49-8809-d9a6fa995b77)'. Ident: 'index-319--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 373)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.032-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (b65f9d6c-8205-4b49-8809-d9a6fa995b77)'. Ident: 'index-324--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 373)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.032-0500 I STORAGE [conn82] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-317--2588534479858262356, commit timestamp: Timestamp(1574796726, 373)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.032-0500 I INDEX [conn88] Registering index build: e3d27712-374f-4c4f-9c96-1823d12d048a
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.033-0500 I INDEX [conn77] Registering index build: 12d81dc6-0002-4531-9bb6-6e48e0fd72e2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.033-0500 I INDEX [conn84] Registering index build: 0390737f-d69a-41e4-9268-ae58e9ab4fe8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.033-0500 I COMMAND [conn81] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1798101688562690764, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3618849868728379206, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796723015), clusterTime: Timestamp(1574796723, 2) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796723, 2), signature: { hash: BinData(0, A52D224811EE02C2D2B5572BB138ABAA426016D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3016ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:06.033-0500 I COMMAND [conn53] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70") }, $clusterTime: { clusterTime: Timestamp(1574796723, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3018ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.036-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.036-0500 I STORAGE [ReplWriterWorker-4] createCollection: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 with provided UUID: 38074c2a-6a92-45f7-838b-8965c3a7b309 and options: { uuid: UUID("38074c2a-6a92-45f7-838b-8965c3a7b309"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.037-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.038-0500 I STORAGE [ReplWriterWorker-7] createCollection: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 with provided UUID: 38074c2a-6a92-45f7-838b-8965c3a7b309 and options: { uuid: UUID("38074c2a-6a92-45f7-838b-8965c3a7b309"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.047-0500 I INDEX [conn88] index build: starting on test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.047-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.047-0500 I STORAGE [conn88] Index build initialized: e3d27712-374f-4c4f-9c96-1823d12d048a: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b (b2ffcee2-27e0-4c4c-baea-dd0f8042aa30 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.047-0500 I INDEX [conn88] Waiting for index build to complete: e3d27712-374f-4c4f-9c96-1823d12d048a
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.048-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.048-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.050-0500 I STORAGE [conn82] createCollection: test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a with generated UUID: 93c63257-205e-42cd-9acf-c8363aeb1208 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.050-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.053-0500 I STORAGE [ReplWriterWorker-3] createCollection: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b with provided UUID: b2ffcee2-27e0-4c4c-baea-dd0f8042aa30 and options: { uuid: UUID("b2ffcee2-27e0-4c4c-baea-dd0f8042aa30"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.055-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.057-0500 I STORAGE [ReplWriterWorker-1] createCollection: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b with provided UUID: b2ffcee2-27e0-4c4c-baea-dd0f8042aa30 and options: { uuid: UUID("b2ffcee2-27e0-4c4c-baea-dd0f8042aa30"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.060-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.067-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.071-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b (9c782e1b-4d04-4dc2-8f81-30b332175404) to test4_fsmdb0.agg_out and drop b65f9d6c-8205-4b49-8809-d9a6fa995b77.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.071-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test4_fsmdb0.agg_out (b65f9d6c-8205-4b49-8809-d9a6fa995b77) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 373), t: 1 } and commit timestamp Timestamp(1574796726, 373)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.071-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test4_fsmdb0.agg_out (b65f9d6c-8205-4b49-8809-d9a6fa995b77).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.071-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 9c782e1b-4d04-4dc2-8f81-30b332175404 from test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.071-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (b65f9d6c-8205-4b49-8809-d9a6fa995b77)'. Ident: 'index-324--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 373)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.071-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (b65f9d6c-8205-4b49-8809-d9a6fa995b77)'. Ident: 'index-335--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 373)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.071-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-323--7234316082034423155, commit timestamp: Timestamp(1574796726, 373)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.073-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.075-0500 I INDEX [conn77] index build: starting on test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.075-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.075-0500 I STORAGE [conn77] Index build initialized: 12d81dc6-0002-4531-9bb6-6e48e0fd72e2: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 (38074c2a-6a92-45f7-838b-8965c3a7b309 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.075-0500 I INDEX [conn77] Waiting for index build to complete: 12d81dc6-0002-4531-9bb6-6e48e0fd72e2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.076-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: e3d27712-374f-4c4f-9c96-1823d12d048a: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b ( b2ffcee2-27e0-4c4c-baea-dd0f8042aa30 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.076-0500 I INDEX [conn88] Index build completed: e3d27712-374f-4c4f-9c96-1823d12d048a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.077-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b (9c782e1b-4d04-4dc2-8f81-30b332175404) to test4_fsmdb0.agg_out and drop b65f9d6c-8205-4b49-8809-d9a6fa995b77.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.077-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test4_fsmdb0.agg_out (b65f9d6c-8205-4b49-8809-d9a6fa995b77) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 373), t: 1 } and commit timestamp Timestamp(1574796726, 373)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.077-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test4_fsmdb0.agg_out (b65f9d6c-8205-4b49-8809-d9a6fa995b77).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.077-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 9c782e1b-4d04-4dc2-8f81-30b332175404 from test4_fsmdb0.tmp.agg_out.ce4b5f5a-e148-4af0-99d7-ac6766a7fd3b to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.077-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (b65f9d6c-8205-4b49-8809-d9a6fa995b77)'. Ident: 'index-324--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 373)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.077-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (b65f9d6c-8205-4b49-8809-d9a6fa995b77)'. Ident: 'index-335--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 373)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.077-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-323--2310912778499990807, commit timestamp: Timestamp(1574796726, 373)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.083-0500 I INDEX [conn82] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.083-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.084-0500 I INDEX [conn82] Registering index build: b1cd8c13-cd92-4fd4-87ed-9c3037e9d97c
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.084-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.084-0500 I COMMAND [conn85] CMD: drop test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.085-0500 I STORAGE [ReplWriterWorker-14] createCollection: test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a with provided UUID: 93c63257-205e-42cd-9acf-c8363aeb1208 and options: { uuid: UUID("93c63257-205e-42cd-9acf-c8363aeb1208"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.085-0500 I STORAGE [ReplWriterWorker-15] createCollection: test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a with provided UUID: 93c63257-205e-42cd-9acf-c8363aeb1208 and options: { uuid: UUID("93c63257-205e-42cd-9acf-c8363aeb1208"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.094-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.100-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.101-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.102-0500 I INDEX [conn84] index build: starting on test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.102-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.102-0500 I STORAGE [conn84] Index build initialized: 0390737f-d69a-41e4-9268-ae58e9ab4fe8: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 (8b458b39-930e-4388-bffb-f6bf754070a6 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.102-0500 I INDEX [conn84] Waiting for index build to complete: 0390737f-d69a-41e4-9268-ae58e9ab4fe8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.102-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.102-0500 I STORAGE [conn85] dropCollection: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 (c4615f37-3be4-4b6e-b8d0-dd476eedafca) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.102-0500 I STORAGE [conn85] Finishing collection drop for test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 (c4615f37-3be4-4b6e-b8d0-dd476eedafca).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.102-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 (c4615f37-3be4-4b6e-b8d0-dd476eedafca)'. Ident: 'index-329--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 1203)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.102-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 (c4615f37-3be4-4b6e-b8d0-dd476eedafca)'. Ident: 'index-330--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 1203)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.102-0500 I STORAGE [conn85] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796'. Ident: collection-327--2588534479858262356, commit timestamp: Timestamp(1574796726, 1203)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.103-0500 I COMMAND [conn64] command test4_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4680316805383202144, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4063382271515699976, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796723073), clusterTime: Timestamp(1574796723, 510) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796723, 510), signature: { hash: BinData(0, A52D224811EE02C2D2B5572BB138ABAA426016D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:991 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3028ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:06.103-0500 I COMMAND [conn164] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458") }, $clusterTime: { clusterTime: Timestamp(1574796723, 510), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:821 protocol:op_msg 3029ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.104-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 12d81dc6-0002-4531-9bb6-6e48e0fd72e2: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 ( 38074c2a-6a92-45f7-838b-8965c3a7b309 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.104-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.106-0500 I STORAGE [conn88] createCollection: test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 with generated UUID: 86bb941f-7b83-4a5f-b482-f55ec5131c25 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.113-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.117-0500 I INDEX [ReplWriterWorker-0] index build: starting on test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.117-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.117-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: e1d668a8-069f-4263-9b92-bdb905343199: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b (b2ffcee2-27e0-4c4c-baea-dd0f8042aa30 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.117-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.117-0500 I INDEX [ReplWriterWorker-11] index build: starting on test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.117-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.117-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: c88e7764-ed8d-433b-8c69-92c6558ac007: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b (b2ffcee2-27e0-4c4c-baea-dd0f8042aa30 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.117-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.118-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.118-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.122-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: drain applied 64 side writes (inserted: 64, deleted: 0) for '_id_hashed' in 1 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.122-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.129-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: e1d668a8-069f-4263-9b92-bdb905343199: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b ( b2ffcee2-27e0-4c4c-baea-dd0f8042aa30 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.130-0500 I INDEX [conn82] index build: starting on test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.130-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.130-0500 I STORAGE [conn82] Index build initialized: b1cd8c13-cd92-4fd4-87ed-9c3037e9d97c: test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a (93c63257-205e-42cd-9acf-c8363aeb1208 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.130-0500 I INDEX [conn82] Waiting for index build to complete: b1cd8c13-cd92-4fd4-87ed-9c3037e9d97c
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.130-0500 I INDEX [conn77] Index build completed: 12d81dc6-0002-4531-9bb6-6e48e0fd72e2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.130-0500 I COMMAND [conn77] command test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 372), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 6530 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 103ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.131-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 0390737f-d69a-41e4-9268-ae58e9ab4fe8: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 ( 8b458b39-930e-4388-bffb-f6bf754070a6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.131-0500 I INDEX [conn84] Index build completed: 0390737f-d69a-41e4-9268-ae58e9ab4fe8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.131-0500 I COMMAND [conn84] command test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 372), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 19535 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 110ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.132-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 64 side writes (inserted: 64, deleted: 0) for '_id_hashed' in 9 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.132-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.137-0500 I INDEX [ReplWriterWorker-5] index build: starting on test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.137-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.137-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 513d13bc-ce1a-4336-b2f2-5a88380ae535: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 (38074c2a-6a92-45f7-838b-8965c3a7b309 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.138-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.138-0500 I INDEX [conn88] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.138-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.138-0500 I INDEX [ReplWriterWorker-5] index build: starting on test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.138-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.138-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.138-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: f225637b-d7b1-4262-b795-e0753ffd24f6: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 (38074c2a-6a92-45f7-838b-8965c3a7b309 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.139-0500 I INDEX [conn88] Registering index build: 19c9a2f4-55bf-48a0-97a8-c78ca278a82f
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.141-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:06.163-0500 I COMMAND [conn52] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a") }, $clusterTime: { clusterTime: Timestamp(1574796725, 64), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:821 protocol:op_msg 165ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:06.171-0500 I COMMAND [conn170] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6") }, $clusterTime: { clusterTime: Timestamp(1574796723, 2021), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:821 protocol:op_msg 3027ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.138-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.142-0500 I COMMAND [ReplWriterWorker-2] CMD: drop test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:06.226-0500 I COMMAND [conn53] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70") }, $clusterTime: { clusterTime: Timestamp(1574796726, 503), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 177ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.139-0500 I COMMAND [conn85] CMD: drop test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.142-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 (c4615f37-3be4-4b6e-b8d0-dd476eedafca) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 1203), t: 1 } and commit timestamp Timestamp(1574796726, 1203)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.139-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c88e7764-ed8d-433b-8c69-92c6558ac007: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b ( b2ffcee2-27e0-4c4c-baea-dd0f8042aa30 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:06.226-0500 I COMMAND [conn169] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51") }, $clusterTime: { clusterTime: Timestamp(1574796723, 1966), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:821 protocol:op_msg 3085ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:06.339-0500 I COMMAND [conn52] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a") }, $clusterTime: { clusterTime: Timestamp(1574796726, 2083), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:820 protocol:op_msg 174ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.139-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.142-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 (c4615f37-3be4-4b6e-b8d0-dd476eedafca).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.139-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:06.226-0500 I COMMAND [conn164] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458") }, $clusterTime: { clusterTime: Timestamp(1574796726, 1203), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 121ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:09.177-0500 I COMMAND [conn53] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70") }, $clusterTime: { clusterTime: Timestamp(1574796726, 3340), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2930ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.150-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.143-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 (c4615f37-3be4-4b6e-b8d0-dd476eedafca)'. Ident: 'index-334--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 1203)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.142-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:06.360-0500 I COMMAND [conn170] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6") }, $clusterTime: { clusterTime: Timestamp(1574796726, 2150), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:820 protocol:op_msg 188ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:09.179-0500 I COMMAND [conn52] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a") }, $clusterTime: { clusterTime: Timestamp(1574796726, 4350), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2839ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.162-0500 I INDEX [conn88] index build: starting on test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.143-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 (c4615f37-3be4-4b6e-b8d0-dd476eedafca)'. Ident: 'index-339--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 1203)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.143-0500 I COMMAND [ReplWriterWorker-8] CMD: drop test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:06.417-0500 I COMMAND [conn164] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458") }, $clusterTime: { clusterTime: Timestamp(1574796726, 3336), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 189ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.162-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.143-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796'. Ident: collection-333--2310912778499990807, commit timestamp: Timestamp(1574796726, 1203)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.144-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 (c4615f37-3be4-4b6e-b8d0-dd476eedafca) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 1203), t: 1 } and commit timestamp Timestamp(1574796726, 1203)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:06.418-0500 I COMMAND [conn169] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51") }, $clusterTime: { clusterTime: Timestamp(1574796726, 3335), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 190ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.162-0500 I STORAGE [conn88] Index build initialized: 19c9a2f4-55bf-48a0-97a8-c78ca278a82f: test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 (86bb941f-7b83-4a5f-b482-f55ec5131c25 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.145-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 513d13bc-ce1a-4336-b2f2-5a88380ae535: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 ( 38074c2a-6a92-45f7-838b-8965c3a7b309 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.144-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 (c4615f37-3be4-4b6e-b8d0-dd476eedafca).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.162-0500 I INDEX [conn88] Waiting for index build to complete: 19c9a2f4-55bf-48a0-97a8-c78ca278a82f
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.145-0500 I STORAGE [ReplWriterWorker-8] createCollection: test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 with provided UUID: 86bb941f-7b83-4a5f-b482-f55ec5131c25 and options: { uuid: UUID("86bb941f-7b83-4a5f-b482-f55ec5131c25"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.144-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 (c4615f37-3be4-4b6e-b8d0-dd476eedafca)'. Ident: 'index-334--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 1203)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.162-0500 I STORAGE [conn85] dropCollection: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b (b2ffcee2-27e0-4c4c-baea-dd0f8042aa30) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.162-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.144-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796 (c4615f37-3be4-4b6e-b8d0-dd476eedafca)'. Ident: 'index-339--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 1203)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.162-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.177-0500 I INDEX [ReplWriterWorker-13] index build: starting on test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.144-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796'. Ident: collection-333--7234316082034423155, commit timestamp: Timestamp(1574796726, 1203)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.162-0500 I STORAGE [conn85] Finishing collection drop for test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b (b2ffcee2-27e0-4c4c-baea-dd0f8042aa30).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.177-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.146-0500 I STORAGE [ReplWriterWorker-14] createCollection: test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 with provided UUID: 86bb941f-7b83-4a5f-b482-f55ec5131c25 and options: { uuid: UUID("86bb941f-7b83-4a5f-b482-f55ec5131c25"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.162-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b (b2ffcee2-27e0-4c4c-baea-dd0f8042aa30)'. Ident: 'index-337--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 2083)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.177-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 1e6c6a48-2ba3-4a85-9486-f94987c007dd: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 (8b458b39-930e-4388-bffb-f6bf754070a6 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.147-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: f225637b-d7b1-4262-b795-e0753ffd24f6: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 ( 38074c2a-6a92-45f7-838b-8965c3a7b309 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.162-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b (b2ffcee2-27e0-4c4c-baea-dd0f8042aa30)'. Ident: 'index-338--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 2083)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.177-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.163-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.162-0500 I STORAGE [conn85] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b'. Ident: collection-334--2588534479858262356, commit timestamp: Timestamp(1574796726, 2083)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.178-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.180-0500 I INDEX [ReplWriterWorker-13] index build: starting on test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.163-0500 I COMMAND [conn80] command test4_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2310863922564566967, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 458128222284218841, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796725997), clusterTime: Timestamp(1574796725, 64) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796725, 130), signature: { hash: BinData(0, 9C3D7C7F6893577A667A0A4EA6435E9248E73AA5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:991 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 164ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.180-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.180-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.166-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: b1cd8c13-cd92-4fd4-87ed-9c3037e9d97c: test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a ( 93c63257-205e-42cd-9acf-c8363aeb1208 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.183-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 1e6c6a48-2ba3-4a85-9486-f94987c007dd: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 ( 8b458b39-930e-4388-bffb-f6bf754070a6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.180-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: b87959b9-c638-4058-a1c8-0a7a79313def: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 (8b458b39-930e-4388-bffb-f6bf754070a6 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.166-0500 I INDEX [conn82] Index build completed: b1cd8c13-cd92-4fd4-87ed-9c3037e9d97c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.201-0500 I INDEX [ReplWriterWorker-11] index build: starting on test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.180-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.167-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.202-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.181-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.169-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.202-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: d9953da5-75fd-4ac2-a72f-5019ca217334: test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a (93c63257-205e-42cd-9acf-c8363aeb1208 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.183-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.170-0500 I COMMAND [conn84] CMD: drop test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.202-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.186-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: b87959b9-c638-4058-a1c8-0a7a79313def: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 ( 8b458b39-930e-4388-bffb-f6bf754070a6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.170-0500 I STORAGE [conn84] dropCollection: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 (38074c2a-6a92-45f7-838b-8965c3a7b309) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.202-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.205-0500 I INDEX [ReplWriterWorker-15] index build: starting on test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.170-0500 I STORAGE [conn84] Finishing collection drop for test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 (38074c2a-6a92-45f7-838b-8965c3a7b309).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.205-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.205-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.170-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 (38074c2a-6a92-45f7-838b-8965c3a7b309)'. Ident: 'index-336--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 2150)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.207-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.205-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 710b99e3-0774-46aa-9bc7-78013d35d155: test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a (93c63257-205e-42cd-9acf-c8363aeb1208 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.170-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 (38074c2a-6a92-45f7-838b-8965c3a7b309)'. Ident: 'index-340--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 2150)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.207-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b (b2ffcee2-27e0-4c4c-baea-dd0f8042aa30) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 2083), t: 1 } and commit timestamp Timestamp(1574796726, 2083)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.206-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.170-0500 I STORAGE [conn84] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05'. Ident: collection-333--2588534479858262356, commit timestamp: Timestamp(1574796726, 2150)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.207-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b (b2ffcee2-27e0-4c4c-baea-dd0f8042aa30).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.206-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.170-0500 I COMMAND [conn65] command test4_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6029671308904795691, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8126516292210711152, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796723144), clusterTime: Timestamp(1574796723, 2021) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796725, 64), signature: { hash: BinData(0, 9C3D7C7F6893577A667A0A4EA6435E9248E73AA5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:991 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 174ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.207-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b (b2ffcee2-27e0-4c4c-baea-dd0f8042aa30)'. Ident: 'index-346--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 2083)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.209-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.170-0500 I STORAGE [conn84] createCollection: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b with generated UUID: f1af14b0-31f6-4052-8ace-db55ff5d7550 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.207-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b (b2ffcee2-27e0-4c4c-baea-dd0f8042aa30)'. Ident: 'index-349--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 2083)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.210-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.173-0500 I STORAGE [conn82] createCollection: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a with generated UUID: 77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.207-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b'. Ident: collection-345--2310912778499990807, commit timestamp: Timestamp(1574796726, 2083)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.210-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b (b2ffcee2-27e0-4c4c-baea-dd0f8042aa30) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 2083), t: 1 } and commit timestamp Timestamp(1574796726, 2083)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.174-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 19c9a2f4-55bf-48a0-97a8-c78ca278a82f: test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 ( 86bb941f-7b83-4a5f-b482-f55ec5131c25 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.209-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: d9953da5-75fd-4ac2-a72f-5019ca217334: test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a ( 93c63257-205e-42cd-9acf-c8363aeb1208 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.210-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b (b2ffcee2-27e0-4c4c-baea-dd0f8042aa30).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.174-0500 I INDEX [conn88] Index build completed: 19c9a2f4-55bf-48a0-97a8-c78ca278a82f
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.224-0500 I INDEX [ReplWriterWorker-1] index build: starting on test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.211-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b (b2ffcee2-27e0-4c4c-baea-dd0f8042aa30)'. Ident: 'index-346--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 2083)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.196-0500 I INDEX [conn84] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.224-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.211-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b (b2ffcee2-27e0-4c4c-baea-dd0f8042aa30)'. Ident: 'index-349--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 2083)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.203-0500 I INDEX [conn82] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.224-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 1f651c65-34f6-406e-95a8-695d581a7d2b: test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 (86bb941f-7b83-4a5f-b482-f55ec5131c25 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.211-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b'. Ident: collection-345--7234316082034423155, commit timestamp: Timestamp(1574796726, 2083)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.204-0500 I INDEX [conn84] Registering index build: e88e95c5-8d34-4fbe-bf55-5b9f885045b2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.225-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.213-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 710b99e3-0774-46aa-9bc7-78013d35d155: test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a ( 93c63257-205e-42cd-9acf-c8363aeb1208 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.204-0500 I INDEX [conn82] Registering index build: 39843261-4ea6-4c9d-911f-637cc4f0c5d0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.225-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.226-0500 I INDEX [ReplWriterWorker-0] index build: starting on test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.204-0500 I COMMAND [conn77] CMD: drop test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.228-0500 I COMMAND [ReplWriterWorker-2] CMD: drop test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.226-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.224-0500 I INDEX [conn84] index build: starting on test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.228-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 (38074c2a-6a92-45f7-838b-8965c3a7b309) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 2150), t: 1 } and commit timestamp Timestamp(1574796726, 2150)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.226-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: da34803c-e23a-412b-b502-f43885dadc1b: test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 (86bb941f-7b83-4a5f-b482-f55ec5131c25 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.224-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.228-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 (38074c2a-6a92-45f7-838b-8965c3a7b309).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.226-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.225-0500 I STORAGE [conn84] Index build initialized: e88e95c5-8d34-4fbe-bf55-5b9f885045b2: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b (f1af14b0-31f6-4052-8ace-db55ff5d7550 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.228-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 (38074c2a-6a92-45f7-838b-8965c3a7b309)'. Ident: 'index-344--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 2150)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.227-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.225-0500 I INDEX [conn84] Waiting for index build to complete: e88e95c5-8d34-4fbe-bf55-5b9f885045b2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.228-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 (38074c2a-6a92-45f7-838b-8965c3a7b309)'. Ident: 'index-351--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 2150)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.229-0500 I COMMAND [ReplWriterWorker-9] CMD: drop test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.225-0500 I STORAGE [conn77] dropCollection: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 (8b458b39-930e-4388-bffb-f6bf754070a6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.228-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05'. Ident: collection-343--2310912778499990807, commit timestamp: Timestamp(1574796726, 2150)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.229-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 (38074c2a-6a92-45f7-838b-8965c3a7b309) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 2150), t: 1 } and commit timestamp Timestamp(1574796726, 2150)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.225-0500 I STORAGE [conn77] Finishing collection drop for test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 (8b458b39-930e-4388-bffb-f6bf754070a6).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.228-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.229-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 (38074c2a-6a92-45f7-838b-8965c3a7b309).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.225-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 (8b458b39-930e-4388-bffb-f6bf754070a6)'. Ident: 'index-335--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 3334)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.229-0500 I STORAGE [ReplWriterWorker-13] createCollection: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b with provided UUID: f1af14b0-31f6-4052-8ace-db55ff5d7550 and options: { uuid: UUID("f1af14b0-31f6-4052-8ace-db55ff5d7550"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.229-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 (38074c2a-6a92-45f7-838b-8965c3a7b309)'. Ident: 'index-344--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 2150)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.225-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 (8b458b39-930e-4388-bffb-f6bf754070a6)'. Ident: 'index-344--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 3334)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.232-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 1f651c65-34f6-406e-95a8-695d581a7d2b: test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 ( 86bb941f-7b83-4a5f-b482-f55ec5131c25 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.229-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05 (38074c2a-6a92-45f7-838b-8965c3a7b309)'. Ident: 'index-351--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 2150)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.225-0500 I STORAGE [conn77] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235'. Ident: collection-332--2588534479858262356, commit timestamp: Timestamp(1574796726, 3334)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.248-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.229-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05'. Ident: collection-343--7234316082034423155, commit timestamp: Timestamp(1574796726, 2150)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.225-0500 I COMMAND [conn85] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.249-0500 I STORAGE [ReplWriterWorker-0] createCollection: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a with provided UUID: 77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9 and options: { uuid: UUID("77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.230-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.225-0500 I STORAGE [conn85] dropCollection: test4_fsmdb0.agg_out (9c782e1b-4d04-4dc2-8f81-30b332175404) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 3335), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.263-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.230-0500 I STORAGE [ReplWriterWorker-2] createCollection: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b with provided UUID: f1af14b0-31f6-4052-8ace-db55ff5d7550 and options: { uuid: UUID("f1af14b0-31f6-4052-8ace-db55ff5d7550"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.225-0500 I STORAGE [conn85] Finishing collection drop for test4_fsmdb0.agg_out (9c782e1b-4d04-4dc2-8f81-30b332175404).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.281-0500 I COMMAND [ReplWriterWorker-4] CMD: drop test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.234-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: da34803c-e23a-412b-b502-f43885dadc1b: test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 ( 86bb941f-7b83-4a5f-b482-f55ec5131c25 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.225-0500 I STORAGE [conn85] renameCollection: renaming collection 86bb941f-7b83-4a5f-b482-f55ec5131c25 from test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.281-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 (8b458b39-930e-4388-bffb-f6bf754070a6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 3334), t: 1 } and commit timestamp Timestamp(1574796726, 3334)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.250-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.226-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (9c782e1b-4d04-4dc2-8f81-30b332175404)'. Ident: 'index-323--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 3335)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.281-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 (8b458b39-930e-4388-bffb-f6bf754070a6).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.251-0500 I STORAGE [ReplWriterWorker-8] createCollection: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a with provided UUID: 77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9 and options: { uuid: UUID("77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.226-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (9c782e1b-4d04-4dc2-8f81-30b332175404)'. Ident: 'index-326--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 3335)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.281-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 (8b458b39-930e-4388-bffb-f6bf754070a6)'. Ident: 'index-342--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 3334)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.267-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.226-0500 I STORAGE [conn85] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-321--2588534479858262356, commit timestamp: Timestamp(1574796726, 3335)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.281-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 (8b458b39-930e-4388-bffb-f6bf754070a6)'. Ident: 'index-355--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 3334)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.274-0500 I COMMAND [ReplWriterWorker-14] CMD: drop test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.226-0500 I COMMAND [conn62] command test4_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 794239630766855779, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7891367351943879273, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796723141), clusterTime: Timestamp(1574796723, 1966) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796723, 2021), signature: { hash: BinData(0, A52D224811EE02C2D2B5572BB138ABAA426016D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796717, 8), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:991 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2852968 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3082ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.281-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235'. Ident: collection-341--2310912778499990807, commit timestamp: Timestamp(1574796726, 3334)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.274-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 (8b458b39-930e-4388-bffb-f6bf754070a6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 3334), t: 1 } and commit timestamp Timestamp(1574796726, 3334)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.226-0500 I COMMAND [conn88] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.281-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 (86bb941f-7b83-4a5f-b482-f55ec5131c25) to test4_fsmdb0.agg_out and drop 9c782e1b-4d04-4dc2-8f81-30b332175404.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.274-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 (8b458b39-930e-4388-bffb-f6bf754070a6).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.226-0500 I STORAGE [conn88] dropCollection: test4_fsmdb0.agg_out (86bb941f-7b83-4a5f-b482-f55ec5131c25) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 3336), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.282-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test4_fsmdb0.agg_out (9c782e1b-4d04-4dc2-8f81-30b332175404) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 3335), t: 1 } and commit timestamp Timestamp(1574796726, 3335)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.275-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 (8b458b39-930e-4388-bffb-f6bf754070a6)'. Ident: 'index-342--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 3334)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.226-0500 I STORAGE [conn88] Finishing collection drop for test4_fsmdb0.agg_out (86bb941f-7b83-4a5f-b482-f55ec5131c25).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.282-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test4_fsmdb0.agg_out (9c782e1b-4d04-4dc2-8f81-30b332175404).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.275-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235 (8b458b39-930e-4388-bffb-f6bf754070a6)'. Ident: 'index-355--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 3334)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.226-0500 I COMMAND [conn64] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 793525169824340464, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8180439918207295214, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796726104), clusterTime: Timestamp(1574796726, 1203) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 1319), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 120ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.282-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 86bb941f-7b83-4a5f-b482-f55ec5131c25 from test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.275-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235'. Ident: collection-341--7234316082034423155, commit timestamp: Timestamp(1574796726, 3334)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.226-0500 I STORAGE [conn88] renameCollection: renaming collection 93c63257-205e-42cd-9acf-c8363aeb1208 from test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.282-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (9c782e1b-4d04-4dc2-8f81-30b332175404)'. Ident: 'index-328--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 3335)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.275-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 (86bb941f-7b83-4a5f-b482-f55ec5131c25) to test4_fsmdb0.agg_out and drop 9c782e1b-4d04-4dc2-8f81-30b332175404.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.226-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (86bb941f-7b83-4a5f-b482-f55ec5131c25)'. Ident: 'index-349--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 3336)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.282-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (9c782e1b-4d04-4dc2-8f81-30b332175404)'. Ident: 'index-337--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 3335)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.275-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test4_fsmdb0.agg_out (9c782e1b-4d04-4dc2-8f81-30b332175404) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 3335), t: 1 } and commit timestamp Timestamp(1574796726, 3335)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.226-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (86bb941f-7b83-4a5f-b482-f55ec5131c25)'. Ident: 'index-350--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 3336)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.282-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-327--2310912778499990807, commit timestamp: Timestamp(1574796726, 3335)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.275-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test4_fsmdb0.agg_out (9c782e1b-4d04-4dc2-8f81-30b332175404).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.226-0500 I STORAGE [conn88] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-347--2588534479858262356, commit timestamp: Timestamp(1574796726, 3336)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.282-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a (93c63257-205e-42cd-9acf-c8363aeb1208) to test4_fsmdb0.agg_out and drop 86bb941f-7b83-4a5f-b482-f55ec5131c25.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.275-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 86bb941f-7b83-4a5f-b482-f55ec5131c25 from test4_fsmdb0.tmp.agg_out.24ea14a0-fc6f-4a3b-830f-2dc0eee57af7 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.226-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.283-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test4_fsmdb0.agg_out (86bb941f-7b83-4a5f-b482-f55ec5131c25) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 3336), t: 1 } and commit timestamp Timestamp(1574796726, 3336)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.275-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (9c782e1b-4d04-4dc2-8f81-30b332175404)'. Ident: 'index-328--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 3335)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.226-0500 I COMMAND  [conn81] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2799590100394135689, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2281545397093645179, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796726049), clusterTime: Timestamp(1574796726, 503) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 631), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 176ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.283-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test4_fsmdb0.agg_out (86bb941f-7b83-4a5f-b482-f55ec5131c25).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.275-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (9c782e1b-4d04-4dc2-8f81-30b332175404)'. Ident: 'index-337--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 3335)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.227-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.283-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 93c63257-205e-42cd-9acf-c8363aeb1208 from test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.275-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-327--7234316082034423155, commit timestamp: Timestamp(1574796726, 3335)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.238-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.283-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (86bb941f-7b83-4a5f-b482-f55ec5131c25)'. Ident: 'index-354--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 3336)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.276-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a (93c63257-205e-42cd-9acf-c8363aeb1208) to test4_fsmdb0.agg_out and drop 86bb941f-7b83-4a5f-b482-f55ec5131c25.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.245-0500 I INDEX [conn82] index build: starting on test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.283-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (86bb941f-7b83-4a5f-b482-f55ec5131c25)'. Ident: 'index-359--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 3336)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.276-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test4_fsmdb0.agg_out (86bb941f-7b83-4a5f-b482-f55ec5131c25) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 3336), t: 1 } and commit timestamp Timestamp(1574796726, 3336)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.245-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.283-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-353--2310912778499990807, commit timestamp: Timestamp(1574796726, 3336)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.276-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test4_fsmdb0.agg_out (86bb941f-7b83-4a5f-b482-f55ec5131c25).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.245-0500 I STORAGE [conn82] Index build initialized: 39843261-4ea6-4c9d-911f-637cc4f0c5d0: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a (77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.297-0500 I INDEX [ReplWriterWorker-13] index build: starting on test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.276-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 93c63257-205e-42cd-9acf-c8363aeb1208 from test4_fsmdb0.tmp.agg_out.3f1fb14d-c321-4efd-a87c-2e6777171f4a to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.245-0500 I INDEX [conn82] Waiting for index build to complete: 39843261-4ea6-4c9d-911f-637cc4f0c5d0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.297-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.276-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (86bb941f-7b83-4a5f-b482-f55ec5131c25)'. Ident: 'index-354--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 3336)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.245-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.298-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: a69acc19-74c8-4df2-99fb-63bdb9b8fd43: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b (f1af14b0-31f6-4052-8ace-db55ff5d7550 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.276-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (86bb941f-7b83-4a5f-b482-f55ec5131c25)'. Ident: 'index-359--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 3336)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.246-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: e88e95c5-8d34-4fbe-bf55-5b9f885045b2: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b ( f1af14b0-31f6-4052-8ace-db55ff5d7550 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.298-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.276-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-353--7234316082034423155, commit timestamp: Timestamp(1574796726, 3336)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.246-0500 I INDEX [conn84] Index build completed: e88e95c5-8d34-4fbe-bf55-5b9f885045b2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.298-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.292-0500 I INDEX [ReplWriterWorker-12] index build: starting on test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.246-0500 I STORAGE [conn88] createCollection: test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d with generated UUID: 30310ce8-9bdb-49bd-89b4-4a1d5438c6db and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.301-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.292-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.246-0500 I STORAGE [conn85] createCollection: test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 with generated UUID: 76651a3d-97e1-4c49-bb38-9125d0e3faf6 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.302-0500 I STORAGE [ReplWriterWorker-2] createCollection: test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d with provided UUID: 30310ce8-9bdb-49bd-89b4-4a1d5438c6db and options: { uuid: UUID("30310ce8-9bdb-49bd-89b4-4a1d5438c6db"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.292-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 58e11033-851f-4538-a94e-278d8d154e16: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b (f1af14b0-31f6-4052-8ace-db55ff5d7550 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.246-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.303-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: a69acc19-74c8-4df2-99fb-63bdb9b8fd43: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b ( f1af14b0-31f6-4052-8ace-db55ff5d7550 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.292-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.248-0500 I STORAGE [conn77] createCollection: test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 with generated UUID: 137831bc-02a9-4b1f-848b-320ccb8b5425 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.318-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.293-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.263-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.318-0500 I STORAGE [ReplWriterWorker-4] createCollection: test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 with provided UUID: 76651a3d-97e1-4c49-bb38-9125d0e3faf6 and options: { uuid: UUID("76651a3d-97e1-4c49-bb38-9125d0e3faf6"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.295-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.280-0500 I INDEX [conn88] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.335-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.296-0500 I STORAGE [ReplWriterWorker-10] createCollection: test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d with provided UUID: 30310ce8-9bdb-49bd-89b4-4a1d5438c6db and options: { uuid: UUID("30310ce8-9bdb-49bd-89b4-4a1d5438c6db"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.281-0500 I INDEX [conn88] Registering index build: b49222b5-2d96-43a0-94a0-af7e62a5dd7f
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.336-0500 I STORAGE [ReplWriterWorker-11] createCollection: test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 with provided UUID: 137831bc-02a9-4b1f-848b-320ccb8b5425 and options: { uuid: UUID("137831bc-02a9-4b1f-848b-320ccb8b5425"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.299-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 58e11033-851f-4538-a94e-278d8d154e16: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b ( f1af14b0-31f6-4052-8ace-db55ff5d7550 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.288-0500 I INDEX [conn85] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.350-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.315-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.291-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 39843261-4ea6-4c9d-911f-637cc4f0c5d0: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a ( 77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.379-0500 I INDEX [ReplWriterWorker-12] index build: starting on test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.315-0500 I STORAGE [ReplWriterWorker-14] createCollection: test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 with provided UUID: 76651a3d-97e1-4c49-bb38-9125d0e3faf6 and options: { uuid: UUID("76651a3d-97e1-4c49-bb38-9125d0e3faf6"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.302-0500 I INDEX [conn77] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.379-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.331-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.317-0500 I INDEX [conn88] index build: starting on test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.379-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 8caadbb6-6b8f-4c13-b881-29ba8b83804f: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a (77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.332-0500 I STORAGE [ReplWriterWorker-8] createCollection: test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 with provided UUID: 137831bc-02a9-4b1f-848b-320ccb8b5425 and options: { uuid: UUID("137831bc-02a9-4b1f-848b-320ccb8b5425"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.317-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.380-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.346-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.317-0500 I STORAGE [conn88] Index build initialized: b49222b5-2d96-43a0-94a0-af7e62a5dd7f: test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d (30310ce8-9bdb-49bd-89b4-4a1d5438c6db ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.380-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.364-0500 I INDEX [ReplWriterWorker-15] index build: starting on test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.317-0500 I INDEX [conn88] Waiting for index build to complete: b49222b5-2d96-43a0-94a0-af7e62a5dd7f
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.383-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.364-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.317-0500 I INDEX [conn82] Index build completed: 39843261-4ea6-4c9d-911f-637cc4f0c5d0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.385-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 8caadbb6-6b8f-4c13-b881-29ba8b83804f: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a ( 77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.364-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 357222d4-cd55-49bb-95b7-68de2f7fe340: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a (77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.317-0500 I COMMAND [conn82] command test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 2460), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 1418 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 113ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.404-0500 I INDEX [ReplWriterWorker-6] index build: starting on test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.364-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.318-0500 I INDEX [conn85] Registering index build: 84ba541e-ef29-4fa9-a0e3-d679bfdc79cc
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.404-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.365-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.318-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.404-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 28dd7170-9728-45c2-994c-81f6412c1a12: test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d (30310ce8-9bdb-49bd-89b4-4a1d5438c6db ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.367-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.318-0500 I INDEX [conn77] Registering index build: 184553f1-566e-4ef3-b4bc-31265ebdb07b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.404-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.371-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 357222d4-cd55-49bb-95b7-68de2f7fe340: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a ( 77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.318-0500 I COMMAND [conn82] CMD: drop test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.404-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.386-0500 I INDEX [ReplWriterWorker-0] index build: starting on test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.318-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.408-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.386-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.327-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.408-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.386-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 789dff0f-50ce-4181-b20b-32627cd7567d: test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d (30310ce8-9bdb-49bd-89b4-4a1d5438c6db ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.338-0500 I INDEX [conn85] index build: starting on test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.408-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b (f1af14b0-31f6-4052-8ace-db55ff5d7550) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 4350), t: 1 } and commit timestamp Timestamp(1574796726, 4350)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.386-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.338-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.408-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b (f1af14b0-31f6-4052-8ace-db55ff5d7550).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.387-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.338-0500 I STORAGE [conn85] Index build initialized: 84ba541e-ef29-4fa9-a0e3-d679bfdc79cc: test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 (76651a3d-97e1-4c49-bb38-9125d0e3faf6 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.408-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b (f1af14b0-31f6-4052-8ace-db55ff5d7550)'. Ident: 'index-362--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 4350)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.389-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.338-0500 I INDEX [conn85] Waiting for index build to complete: 84ba541e-ef29-4fa9-a0e3-d679bfdc79cc
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.408-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b (f1af14b0-31f6-4052-8ace-db55ff5d7550)'. Ident: 'index-365--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 4350)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.391-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.338-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.408-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b'. Ident: collection-361--2310912778499990807, commit timestamp: Timestamp(1574796726, 4350)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.391-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b (f1af14b0-31f6-4052-8ace-db55ff5d7550) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 4350), t: 1 } and commit timestamp Timestamp(1574796726, 4350)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.338-0500 I STORAGE [conn82] dropCollection: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b (f1af14b0-31f6-4052-8ace-db55ff5d7550) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.412-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 28dd7170-9728-45c2-994c-81f6412c1a12: test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d ( 30310ce8-9bdb-49bd-89b4-4a1d5438c6db ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.391-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b (f1af14b0-31f6-4052-8ace-db55ff5d7550).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.338-0500 I STORAGE [conn82] Finishing collection drop for test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b (f1af14b0-31f6-4052-8ace-db55ff5d7550).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.428-0500 I INDEX [ReplWriterWorker-12] index build: starting on test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.391-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b (f1af14b0-31f6-4052-8ace-db55ff5d7550)'. Ident: 'index-362--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 4350)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.338-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b (f1af14b0-31f6-4052-8ace-db55ff5d7550)'. Ident: 'index-354--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 4350)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.428-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.391-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b (f1af14b0-31f6-4052-8ace-db55ff5d7550)'. Ident: 'index-365--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 4350)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.338-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b (f1af14b0-31f6-4052-8ace-db55ff5d7550)'. Ident: 'index-356--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 4350)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.428-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: ab097731-783b-43ea-a218-ed0ba0cacfc8: test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 (76651a3d-97e1-4c49-bb38-9125d0e3faf6 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.391-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b'. Ident: collection-361--7234316082034423155, commit timestamp: Timestamp(1574796726, 4350)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.338-0500 I STORAGE [conn82] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b'. Ident: collection-352--2588534479858262356, commit timestamp: Timestamp(1574796726, 4350)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.428-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.393-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 789dff0f-50ce-4181-b20b-32627cd7567d: test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d ( 30310ce8-9bdb-49bd-89b4-4a1d5438c6db ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.338-0500 I COMMAND [conn80] command test4_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3081826975996167438, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4264944014327998707, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796726164), clusterTime: Timestamp(1574796726, 2083) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 2149), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:990 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 168ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.429-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.408-0500 I INDEX [ReplWriterWorker-0] index build: starting on test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.342-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: b49222b5-2d96-43a0-94a0-af7e62a5dd7f: test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d ( 30310ce8-9bdb-49bd-89b4-4a1d5438c6db ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.343-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.408-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.430-0500 I COMMAND [ReplWriterWorker-3] CMD: drop test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.351-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.408-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: c29a7a55-0078-4c1c-a94d-47c779f3ad5b: test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 (76651a3d-97e1-4c49-bb38-9125d0e3faf6 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.430-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a (77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 4354), t: 1 } and commit timestamp Timestamp(1574796726, 4354)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.358-0500 I INDEX [conn77] index build: starting on test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.408-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.430-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a (77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.358-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.409-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.430-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a (77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9)'. Ident: 'index-364--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 4354)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.359-0500 I STORAGE [conn77] Index build initialized: 184553f1-566e-4ef3-b4bc-31265ebdb07b: test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 (137831bc-02a9-4b1f-848b-320ccb8b5425 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.410-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.430-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a (77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9)'. Ident: 'index-373--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 4354)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.359-0500 I INDEX [conn77] Waiting for index build to complete: 184553f1-566e-4ef3-b4bc-31265ebdb07b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.410-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a (77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 4354), t: 1 } and commit timestamp Timestamp(1574796726, 4354)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.430-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a'. Ident: collection-363--2310912778499990807, commit timestamp: Timestamp(1574796726, 4354)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.359-0500 I INDEX [conn88] Index build completed: b49222b5-2d96-43a0-94a0-af7e62a5dd7f
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.410-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a (77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.431-0500 I STORAGE [ReplWriterWorker-7] createCollection: test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 with provided UUID: 04e37496-38da-4cdd-bbf9-86733a6f175b and options: { uuid: UUID("04e37496-38da-4cdd-bbf9-86733a6f175b"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.359-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.410-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a (77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9)'. Ident: 'index-364--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 4354)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.431-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.359-0500 I COMMAND [conn88] CMD: drop test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.410-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a (77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9)'. Ident: 'index-373--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 4354)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.441-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: ab097731-783b-43ea-a218-ed0ba0cacfc8: test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 ( 76651a3d-97e1-4c49-bb38-9125d0e3faf6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.359-0500 I STORAGE [conn88] dropCollection: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a (77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.410-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a'. Ident: collection-363--7234316082034423155, commit timestamp: Timestamp(1574796726, 4354)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.448-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.359-0500 I STORAGE [conn88] Finishing collection drop for test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a (77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.411-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.449-0500 I STORAGE [ReplWriterWorker-8] createCollection: test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 with provided UUID: 1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29 and options: { uuid: UUID("1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.359-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a (77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9)'. Ident: 'index-355--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 4354)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.411-0500 I STORAGE [ReplWriterWorker-9] createCollection: test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 with provided UUID: 04e37496-38da-4cdd-bbf9-86733a6f175b and options: { uuid: UUID("04e37496-38da-4cdd-bbf9-86733a6f175b"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.464-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.359-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a (77f97fd4-9ec6-4554-ae41-2f2f9cfb33c9)'. Ident: 'index-358--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 4354)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.421-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c29a7a55-0078-4c1c-a94d-47c779f3ad5b: test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 ( 76651a3d-97e1-4c49-bb38-9125d0e3faf6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.495-0500 I INDEX [ReplWriterWorker-2] index build: starting on test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.359-0500 I STORAGE [conn88] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a'. Ident: collection-353--2588534479858262356, commit timestamp: Timestamp(1574796726, 4354)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.429-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.495-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.360-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 84ba541e-ef29-4fa9-a0e3-d679bfdc79cc: test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 ( 76651a3d-97e1-4c49-bb38-9125d0e3faf6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.430-0500 I STORAGE [ReplWriterWorker-3] createCollection: test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 with provided UUID: 1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29 and options: { uuid: UUID("1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.495-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 705aec87-e9ec-4584-8142-a947af64492c: test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 (137831bc-02a9-4b1f-848b-320ccb8b5425 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.360-0500 I INDEX [conn85] Index build completed: 84ba541e-ef29-4fa9-a0e3-d679bfdc79cc
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.445-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.495-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.360-0500 I COMMAND [conn65] command test4_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3756149979667103142, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2367416344207954183, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796726171), clusterTime: Timestamp(1574796726, 2150) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 2151), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:990 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 187ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.463-0500 I INDEX [ReplWriterWorker-0] index build: starting on test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.495-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.360-0500 I STORAGE [conn88] createCollection: test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 with generated UUID: 04e37496-38da-4cdd-bbf9-86733a6f175b and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.463-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.498-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.360-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.463-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 9395e690-eb65-49a3-bf1b-7447dd1f8a18: test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 (137831bc-02a9-4b1f-848b-320ccb8b5425 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.500-0500 I COMMAND [conn90] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796726, 5190) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("cdc95860-821e-434f-a778-c3f0a4a57ae9") }, $clusterTime: { clusterTime: Timestamp(1574796726, 5254), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 15636 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 111ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.362-0500 I STORAGE [conn85] createCollection: test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 with generated UUID: 1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.463-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.500-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d (30310ce8-9bdb-49bd-89b4-4a1d5438c6db) to test4_fsmdb0.agg_out and drop 93c63257-205e-42cd-9acf-c8363aeb1208.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.371-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.464-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.501-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test4_fsmdb0.agg_out (93c63257-205e-42cd-9acf-c8363aeb1208) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 5360), t: 1 } and commit timestamp Timestamp(1574796726, 5360)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.388-0500 I INDEX [conn88] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.465-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.501-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test4_fsmdb0.agg_out (93c63257-205e-42cd-9acf-c8363aeb1208).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.388-0500 I INDEX [conn88] Registering index build: ec7f00bc-69cf-44c9-a392-6e249c119ab1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.470-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d (30310ce8-9bdb-49bd-89b4-4a1d5438c6db) to test4_fsmdb0.agg_out and drop 93c63257-205e-42cd-9acf-c8363aeb1208.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.501-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 30310ce8-9bdb-49bd-89b4-4a1d5438c6db from test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.391-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 184553f1-566e-4ef3-b4bc-31265ebdb07b: test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 ( 137831bc-02a9-4b1f-848b-320ccb8b5425 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.470-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test4_fsmdb0.agg_out (93c63257-205e-42cd-9acf-c8363aeb1208) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 5360), t: 1 } and commit timestamp Timestamp(1574796726, 5360)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.501-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 705aec87-e9ec-4584-8142-a947af64492c: test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 ( 137831bc-02a9-4b1f-848b-320ccb8b5425 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.400-0500 I INDEX [conn85] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.470-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test4_fsmdb0.agg_out (93c63257-205e-42cd-9acf-c8363aeb1208).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.501-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (93c63257-205e-42cd-9acf-c8363aeb1208)'. Ident: 'index-348--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 5360)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.416-0500 I INDEX [conn88] index build: starting on test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.470-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 30310ce8-9bdb-49bd-89b4-4a1d5438c6db from test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.501-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (93c63257-205e-42cd-9acf-c8363aeb1208)'. Ident: 'index-357--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 5360)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.416-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.470-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (93c63257-205e-42cd-9acf-c8363aeb1208)'. Ident: 'index-348--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 5360)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.501-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-347--2310912778499990807, commit timestamp: Timestamp(1574796726, 5360)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.416-0500 I STORAGE [conn88] Index build initialized: ec7f00bc-69cf-44c9-a392-6e249c119ab1: test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 (04e37496-38da-4cdd-bbf9-86733a6f175b ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.470-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 9395e690-eb65-49a3-bf1b-7447dd1f8a18: test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 ( 137831bc-02a9-4b1f-848b-320ccb8b5425 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.501-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 (76651a3d-97e1-4c49-bb38-9125d0e3faf6) to test4_fsmdb0.agg_out and drop 30310ce8-9bdb-49bd-89b4-4a1d5438c6db.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.416-0500 I INDEX [conn88] Waiting for index build to complete: ec7f00bc-69cf-44c9-a392-6e249c119ab1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.470-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (93c63257-205e-42cd-9acf-c8363aeb1208)'. Ident: 'index-357--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 5360)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.501-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test4_fsmdb0.agg_out (30310ce8-9bdb-49bd-89b4-4a1d5438c6db) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 5361), t: 1 } and commit timestamp Timestamp(1574796726, 5361)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.416-0500 I INDEX [conn77] Index build completed: 184553f1-566e-4ef3-b4bc-31265ebdb07b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.470-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-347--7234316082034423155, commit timestamp: Timestamp(1574796726, 5360)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.501-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test4_fsmdb0.agg_out (30310ce8-9bdb-49bd-89b4-4a1d5438c6db).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.416-0500 I COMMAND [conn84] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.471-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 (76651a3d-97e1-4c49-bb38-9125d0e3faf6) to test4_fsmdb0.agg_out and drop 30310ce8-9bdb-49bd-89b4-4a1d5438c6db.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.471-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test4_fsmdb0.agg_out (30310ce8-9bdb-49bd-89b4-4a1d5438c6db) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 5361), t: 1 } and commit timestamp Timestamp(1574796726, 5361)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.416-0500 I COMMAND [conn77] command test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 3845), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 14953 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 113ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.501-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 76651a3d-97e1-4c49-bb38-9125d0e3faf6 from test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.471-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test4_fsmdb0.agg_out (30310ce8-9bdb-49bd-89b4-4a1d5438c6db).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.416-0500 I STORAGE [conn84] dropCollection: test4_fsmdb0.agg_out (93c63257-205e-42cd-9acf-c8363aeb1208) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 5360), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.502-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (30310ce8-9bdb-49bd-89b4-4a1d5438c6db)'. Ident: 'index-368--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 5361)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.471-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 76651a3d-97e1-4c49-bb38-9125d0e3faf6 from test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.416-0500 I STORAGE [conn84] Finishing collection drop for test4_fsmdb0.agg_out (93c63257-205e-42cd-9acf-c8363aeb1208).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.502-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (30310ce8-9bdb-49bd-89b4-4a1d5438c6db)'. Ident: 'index-375--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 5361)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.471-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (30310ce8-9bdb-49bd-89b4-4a1d5438c6db)'. Ident: 'index-368--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 5361)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.416-0500 I STORAGE [conn84] renameCollection: renaming collection 30310ce8-9bdb-49bd-89b4-4a1d5438c6db from test4_fsmdb0.tmp.agg_out.1505e70c-5ca2-4a84-97cb-243a1215766d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.502-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-367--2310912778499990807, commit timestamp: Timestamp(1574796726, 5361)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.471-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (30310ce8-9bdb-49bd-89b4-4a1d5438c6db)'. Ident: 'index-375--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 5361)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.416-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (93c63257-205e-42cd-9acf-c8363aeb1208)'. Ident: 'index-343--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 5360)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.502-0500 I STORAGE [ReplWriterWorker-2] createCollection: test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 with provided UUID: 323f8a1c-6905-415d-a9b1-3c874d19ada8 and options: { uuid: UUID("323f8a1c-6905-415d-a9b1-3c874d19ada8"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.471-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-367--7234316082034423155, commit timestamp: Timestamp(1574796726, 5361)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.417-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (93c63257-205e-42cd-9acf-c8363aeb1208)'. Ident: 'index-346--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 5360)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.517-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.472-0500 I STORAGE [ReplWriterWorker-9] createCollection: test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 with provided UUID: 323f8a1c-6905-415d-a9b1-3c874d19ada8 and options: { uuid: UUID("323f8a1c-6905-415d-a9b1-3c874d19ada8"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.417-0500 I STORAGE [conn84] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-341--2588534479858262356, commit timestamp: Timestamp(1574796726, 5360)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.529-0500 I INDEX [ReplWriterWorker-13] index build: starting on test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.487-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.417-0500 I COMMAND [conn82] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.529-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.505-0500 I INDEX [ReplWriterWorker-10] index build: starting on test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.417-0500 I COMMAND [conn64] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2058495653089847322, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1590090801164080263, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796726227), clusterTime: Timestamp(1574796726, 3336) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 3336), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 17292 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 188ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.529-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 75bf41fd-0307-4a99-b746-e59ab628d513: test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 (04e37496-38da-4cdd-bbf9-86733a6f175b ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.505-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.417-0500 I STORAGE [conn82] dropCollection: test4_fsmdb0.agg_out (30310ce8-9bdb-49bd-89b4-4a1d5438c6db) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 5361), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.529-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.505-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: cde66144-7010-40bd-abae-1b1a589ea3fa: test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 (04e37496-38da-4cdd-bbf9-86733a6f175b ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.417-0500 I STORAGE [conn82] Finishing collection drop for test4_fsmdb0.agg_out (30310ce8-9bdb-49bd-89b4-4a1d5438c6db).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.530-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.505-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.417-0500 I STORAGE [conn82] renameCollection: renaming collection 76651a3d-97e1-4c49-bb38-9125d0e3faf6 from test4_fsmdb0.tmp.agg_out.a4cc4d72-50ac-4d5f-b8c6-56b571fdde89 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.530-0500 I STORAGE [ReplWriterWorker-12] createCollection: test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f with provided UUID: d585e869-f5cf-413f-87d8-d66ec7926809 and options: { uuid: UUID("d585e869-f5cf-413f-87d8-d66ec7926809"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.506-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.417-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (30310ce8-9bdb-49bd-89b4-4a1d5438c6db)'. Ident: 'index-363--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 5361)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.532-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.506-0500 I STORAGE [ReplWriterWorker-14] createCollection: test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f with provided UUID: d585e869-f5cf-413f-87d8-d66ec7926809 and options: { uuid: UUID("d585e869-f5cf-413f-87d8-d66ec7926809"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.417-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (30310ce8-9bdb-49bd-89b4-4a1d5438c6db)'. Ident: 'index-366--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 5361)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.541-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 75bf41fd-0307-4a99-b746-e59ab628d513: test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 ( 04e37496-38da-4cdd-bbf9-86733a6f175b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.508-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.417-0500 I STORAGE [conn82] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-360--2588534479858262356, commit timestamp: Timestamp(1574796726, 5361)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.546-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.516-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: cde66144-7010-40bd-abae-1b1a589ea3fa: test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 ( 04e37496-38da-4cdd-bbf9-86733a6f175b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.417-0500 I INDEX [conn85] Registering index build: 7ed65aa2-4ab9-45e0-a29d-37ace2c7e28d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.564-0500 I INDEX [ReplWriterWorker-0] index build: starting on test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.523-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.417-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.564-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.542-0500 I INDEX [ReplWriterWorker-6] index build: starting on test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.417-0500 I COMMAND [conn62] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4059263486454278278, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7036689584590528874, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796726227), clusterTime: Timestamp(1574796726, 3336) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 3336), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 17337 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 189ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.564-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 5c056d93-02dc-4ca4-91a4-d4676e70112b: test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.542-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.418-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.564-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.542-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: a352e920-484e-420b-9504-4544f21540e6: test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.420-0500 I STORAGE [conn82] createCollection: test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 with generated UUID: 323f8a1c-6905-415d-a9b1-3c874d19ada8 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.564-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.542-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.420-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.565-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 (137831bc-02a9-4b1f-848b-320ccb8b5425) to test4_fsmdb0.agg_out and drop 76651a3d-97e1-4c49-bb38-9125d0e3faf6.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.543-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.420-0500 I STORAGE [conn77] createCollection: test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f with generated UUID: d585e869-f5cf-413f-87d8-d66ec7926809 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.567-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.543-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 (137831bc-02a9-4b1f-848b-320ccb8b5425) to test4_fsmdb0.agg_out and drop 76651a3d-97e1-4c49-bb38-9125d0e3faf6.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.435-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: ec7f00bc-69cf-44c9-a392-6e249c119ab1: test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 ( 04e37496-38da-4cdd-bbf9-86733a6f175b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.567-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test4_fsmdb0.agg_out (76651a3d-97e1-4c49-bb38-9125d0e3faf6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 5869), t: 1 } and commit timestamp Timestamp(1574796726, 5869)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.544-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.450-0500 I INDEX [conn85] index build: starting on test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.567-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test4_fsmdb0.agg_out (76651a3d-97e1-4c49-bb38-9125d0e3faf6).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.545-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test4_fsmdb0.agg_out (76651a3d-97e1-4c49-bb38-9125d0e3faf6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 5869), t: 1 } and commit timestamp Timestamp(1574796726, 5869)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.450-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.567-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 137831bc-02a9-4b1f-848b-320ccb8b5425 from test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:09.222-0500 I COMMAND [conn170] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6") }, $clusterTime: { clusterTime: Timestamp(1574796726, 4354), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2860ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:09.340-0500 I COMMAND [conn52] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a") }, $clusterTime: { clusterTime: Timestamp(1574796729, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 160ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.545-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test4_fsmdb0.agg_out (76651a3d-97e1-4c49-bb38-9125d0e3faf6).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.451-0500 I STORAGE [conn85] Index build initialized: 7ed65aa2-4ab9-45e0-a29d-37ace2c7e28d: test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.567-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (76651a3d-97e1-4c49-bb38-9125d0e3faf6)'. Ident: 'index-370--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 5869)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:09.268-0500 I COMMAND [conn164] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458") }, $clusterTime: { clusterTime: Timestamp(1574796726, 5361), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2849ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:09.397-0500 I COMMAND [conn53] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70") }, $clusterTime: { clusterTime: Timestamp(1574796729, 505), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 195ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.545-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 137831bc-02a9-4b1f-848b-320ccb8b5425 from test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.451-0500 I INDEX [conn85] Waiting for index build to complete: 7ed65aa2-4ab9-45e0-a29d-37ace2c7e28d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.567-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (76651a3d-97e1-4c49-bb38-9125d0e3faf6)'. Ident: 'index-377--2310912778499990807', commit timestamp: 'Timestamp(1574796726, 5869)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:09.302-0500 I COMMAND [conn169] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51") }, $clusterTime: { clusterTime: Timestamp(1574796726, 5361), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2883ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.545-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (76651a3d-97e1-4c49-bb38-9125d0e3faf6)'. Ident: 'index-370--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 5869)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.451-0500 I INDEX [conn88] Index build completed: ec7f00bc-69cf-44c9-a392-6e249c119ab1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.567-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-369--2310912778499990807, commit timestamp: Timestamp(1574796726, 5869)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:09.439-0500 I COMMAND [conn170] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6") }, $clusterTime: { clusterTime: Timestamp(1574796729, 573), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 215ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.545-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (76651a3d-97e1-4c49-bb38-9125d0e3faf6)'. Ident: 'index-377--7234316082034423155', commit timestamp: 'Timestamp(1574796726, 5869)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.451-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:06.568-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 5c056d93-02dc-4ca4-91a4-d4676e70112b: test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 ( 1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:09.443-0500 I COMMAND [conn164] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458") }, $clusterTime: { clusterTime: Timestamp(1574796729, 1141), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 173ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.545-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-369--7234316082034423155, commit timestamp: Timestamp(1574796726, 5869)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.458-0500 I INDEX [conn82] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.185-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 (04e37496-38da-4cdd-bbf9-86733a6f175b) to test4_fsmdb0.agg_out and drop 137831bc-02a9-4b1f-848b-320ccb8b5425.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:06.547-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: a352e920-484e-420b-9504-4544f21540e6: test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 ( 1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.467-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.185-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test4_fsmdb0.agg_out (137831bc-02a9-4b1f-848b-320ccb8b5425) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 2), t: 1 } and commit timestamp Timestamp(1574796729, 2)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.468-0500 I INDEX [conn77] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.185-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 (04e37496-38da-4cdd-bbf9-86733a6f175b) to test4_fsmdb0.agg_out and drop 137831bc-02a9-4b1f-848b-320ccb8b5425.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.185-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test4_fsmdb0.agg_out (137831bc-02a9-4b1f-848b-320ccb8b5425).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.470-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.185-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test4_fsmdb0.agg_out (137831bc-02a9-4b1f-848b-320ccb8b5425) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 2), t: 1 } and commit timestamp Timestamp(1574796729, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.185-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection 04e37496-38da-4cdd-bbf9-86733a6f175b from test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.470-0500 I COMMAND [conn84] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.185-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test4_fsmdb0.agg_out (137831bc-02a9-4b1f-848b-320ccb8b5425).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.185-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (137831bc-02a9-4b1f-848b-320ccb8b5425)'. Ident: 'index-372--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 2)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.470-0500 I STORAGE [conn84] dropCollection: test4_fsmdb0.agg_out (76651a3d-97e1-4c49-bb38-9125d0e3faf6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796726, 5869), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.185-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 04e37496-38da-4cdd-bbf9-86733a6f175b from test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.185-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (137831bc-02a9-4b1f-848b-320ccb8b5425)'. Ident: 'index-383--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 2)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.470-0500 I STORAGE [conn84] Finishing collection drop for test4_fsmdb0.agg_out (76651a3d-97e1-4c49-bb38-9125d0e3faf6).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.185-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (137831bc-02a9-4b1f-848b-320ccb8b5425)'. Ident: 'index-372--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 2)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.185-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-371--2310912778499990807, commit timestamp: Timestamp(1574796729, 2)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.470-0500 I STORAGE [conn84] renameCollection: renaming collection 137831bc-02a9-4b1f-848b-320ccb8b5425 from test4_fsmdb0.tmp.agg_out.98768301-9096-4e1c-8121-4c322e56cec8 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.185-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (137831bc-02a9-4b1f-848b-320ccb8b5425)'. Ident: 'index-383--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 2)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.205-0500 I INDEX [ReplWriterWorker-14] index build: starting on test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.471-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (76651a3d-97e1-4c49-bb38-9125d0e3faf6)'. Ident: 'index-364--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 5869)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.185-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-371--7234316082034423155, commit timestamp: Timestamp(1574796729, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.206-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.471-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (76651a3d-97e1-4c49-bb38-9125d0e3faf6)'. Ident: 'index-368--2588534479858262356', commit timestamp: 'Timestamp(1574796726, 5869)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.204-0500 I INDEX [ReplWriterWorker-6] index build: starting on test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.206-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 87e43f2d-fe5b-4bff-a9a8-b04e0e8a2481: test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 (323f8a1c-6905-415d-a9b1-3c874d19ada8 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.471-0500 I STORAGE [conn84] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-361--2588534479858262356, commit timestamp: Timestamp(1574796726, 5869)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.204-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.206-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.471-0500 I INDEX [conn82] Registering index build: 771b1fe8-9035-4843-b6d2-107cf0b30816
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.204-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: c127fbae-678f-4561-ace8-cdd80e6f3aca: test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 (323f8a1c-6905-415d-a9b1-3c874d19ada8 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.206-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.471-0500 I INDEX [conn77] Registering index build: fd0586e4-d8b8-4a9d-b7c5-c92933705eab
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.204-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.209-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.471-0500 I COMMAND [conn81] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8802192822734709917, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7865939064338315322, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796726247), clusterTime: Timestamp(1574796726, 3340) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 3342), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 223ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.205-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.211-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 87e43f2d-fe5b-4bff-a9a8-b04e0e8a2481: test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 ( 323f8a1c-6905-415d-a9b1-3c874d19ada8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.208-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.222-0500 I STORAGE [ReplWriterWorker-10] createCollection: test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 with provided UUID: 0919f278-8fd5-4dba-9cf5-30be42e59b7a and options: { uuid: UUID("0919f278-8fd5-4dba-9cf5-30be42e59b7a"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.471-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 7ed65aa2-4ab9-45e0-a29d-37ace2c7e28d: test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 ( 1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.211-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c127fbae-678f-4561-ace8-cdd80e6f3aca: test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 ( 323f8a1c-6905-415d-a9b1-3c874d19ada8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.236-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:06.487-0500 I INDEX [conn82] index build: starting on test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.222-0500 I STORAGE [ReplWriterWorker-7] createCollection: test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 with provided UUID: 0919f278-8fd5-4dba-9cf5-30be42e59b7a and options: { uuid: UUID("0919f278-8fd5-4dba-9cf5-30be42e59b7a"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.254-0500 I INDEX [ReplWriterWorker-8] index build: starting on test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.177-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:11.365-0500 I COMMAND [conn169] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51") }, $clusterTime: { clusterTime: Timestamp(1574796729, 1518), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2061ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.237-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.254-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.177-0500 I STORAGE [conn82] Index build initialized: 771b1fe8-9035-4843-b6d2-107cf0b30816: test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 (323f8a1c-6905-415d-a9b1-3c874d19ada8 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.253-0500 I INDEX [ReplWriterWorker-2] index build: starting on test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.254-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 2dd9562a-935c-413c-802c-d3f26efe2581: test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f (d585e869-f5cf-413f-87d8-d66ec7926809 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.177-0500 I INDEX [conn82] Waiting for index build to complete: 771b1fe8-9035-4843-b6d2-107cf0b30816
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.253-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.254-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.177-0500 I INDEX [conn85] Index build completed: 7ed65aa2-4ab9-45e0-a29d-37ace2c7e28d
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.253-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: a146523b-43b7-41d2-9e5a-d085d48149a1: test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f (d585e869-f5cf-413f-87d8-d66ec7926809 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.255-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.177-0500 I COMMAND [conn88] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.253-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.255-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29) to test4_fsmdb0.agg_out and drop 04e37496-38da-4cdd-bbf9-86733a6f175b.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.178-0500 I COMMAND [conn85] command test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 5358), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 16920 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2777ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.254-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:11.366-0500 I COMMAND [conn52] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a") }, $clusterTime: { clusterTime: Timestamp(1574796729, 2027), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2005ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.258-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.178-0500 I STORAGE [conn88] dropCollection: test4_fsmdb0.agg_out (137831bc-02a9-4b1f-848b-320ccb8b5425) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 2), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.254-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29) to test4_fsmdb0.agg_out and drop 04e37496-38da-4cdd-bbf9-86733a6f175b.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.258-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test4_fsmdb0.agg_out (04e37496-38da-4cdd-bbf9-86733a6f175b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 509), t: 1 } and commit timestamp Timestamp(1574796729, 509)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.178-0500 I STORAGE [conn88] Finishing collection drop for test4_fsmdb0.agg_out (137831bc-02a9-4b1f-848b-320ccb8b5425).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.257-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.258-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test4_fsmdb0.agg_out (04e37496-38da-4cdd-bbf9-86733a6f175b).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.178-0500 I STORAGE [conn88] renameCollection: renaming collection 04e37496-38da-4cdd-bbf9-86733a6f175b from test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.257-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test4_fsmdb0.agg_out (04e37496-38da-4cdd-bbf9-86733a6f175b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 509), t: 1 } and commit timestamp Timestamp(1574796729, 509)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.258-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29 from test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.178-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (137831bc-02a9-4b1f-848b-320ccb8b5425)'. Ident: 'index-365--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 2)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.257-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test4_fsmdb0.agg_out (04e37496-38da-4cdd-bbf9-86733a6f175b).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.258-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (04e37496-38da-4cdd-bbf9-86733a6f175b)'. Ident: 'index-380--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 509)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.178-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (137831bc-02a9-4b1f-848b-320ccb8b5425)'. Ident: 'index-370--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 2)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.257-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29 from test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.258-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (04e37496-38da-4cdd-bbf9-86733a6f175b)'. Ident: 'index-387--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 509)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.178-0500 I STORAGE [conn88] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-362--2588534479858262356, commit timestamp: Timestamp(1574796729, 2)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.257-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (04e37496-38da-4cdd-bbf9-86733a6f175b)'. Ident: 'index-380--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 509)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.258-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-379--2310912778499990807, commit timestamp: Timestamp(1574796729, 509)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.178-0500 I COMMAND [conn88] command test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909 appName: "tid:3" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test4_fsmdb0.tmp.agg_out.1b7cd958-979e-4ceb-9c11-4a09b9283909", to: "test4_fsmdb0.agg_out", collectionOptions: { validationLevel: "strict", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 6369), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2688540 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2689ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.257-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (04e37496-38da-4cdd-bbf9-86733a6f175b)'. Ident: 'index-387--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 509)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.259-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 2dd9562a-935c-413c-802c-d3f26efe2581: test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f ( d585e869-f5cf-413f-87d8-d66ec7926809 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.178-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.257-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-379--7234316082034423155, commit timestamp: Timestamp(1574796729, 509)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.272-0500 I STORAGE [ReplWriterWorker-5] createCollection: test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 with provided UUID: 2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74 and options: { uuid: UUID("2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.178-0500 I COMMAND [conn197] command test4_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796726, 5190), lsid: { id: UUID("3f42616f-e948-49d9-97ee-c2eb72d5ff98") }, $clusterTime: { clusterTime: Timestamp(1574796726, 5254), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796726, 5190). Collection minimum timestamp is Timestamp(1574796729, 2)" errName:SnapshotUnavailable errCode:246 reslen:579 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2676596 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2676ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.259-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: a146523b-43b7-41d2-9e5a-d085d48149a1: test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f ( d585e869-f5cf-413f-87d8-d66ec7926809 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.286-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.178-0500 I COMMAND [conn80] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1230831981949385450, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5526929691526539742, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796726339), clusterTime: Timestamp(1574796726, 4350) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 4353), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, 
Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2819ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.261-0500 I STORAGE [ReplWriterWorker-4] createCollection: test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 with provided UUID: 2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74 and options: { uuid: UUID("2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.287-0500 I STORAGE [ReplWriterWorker-3] createCollection: test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 with provided UUID: 35f6d45d-1abe-4d20-bf42-28a41d2a4f01 and options: { uuid: UUID("35f6d45d-1abe-4d20-bf42-28a41d2a4f01"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.179-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.275-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.301-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.179-0500 I COMMAND [conn80] CMD: dropIndexes test4_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.276-0500 I STORAGE [ReplWriterWorker-5] createCollection: test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 with provided UUID: 35f6d45d-1abe-4d20-bf42-28a41d2a4f01 and options: { uuid: UUID("35f6d45d-1abe-4d20-bf42-28a41d2a4f01"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.306-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 (323f8a1c-6905-415d-a9b1-3c874d19ada8) to test4_fsmdb0.agg_out and drop 1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.181-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.290-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.306-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test4_fsmdb0.agg_out (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 1077), t: 1 } and commit timestamp Timestamp(1574796729, 1077)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.188-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 771b1fe8-9035-4843-b6d2-107cf0b30816: test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 ( 323f8a1c-6905-415d-a9b1-3c874d19ada8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.295-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 (323f8a1c-6905-415d-a9b1-3c874d19ada8) to test4_fsmdb0.agg_out and drop 1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.306-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test4_fsmdb0.agg_out (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.200-0500 I INDEX [conn77] index build: starting on test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.295-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test4_fsmdb0.agg_out (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 1077), t: 1 } and commit timestamp Timestamp(1574796729, 1077)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.307-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 323f8a1c-6905-415d-a9b1-3c874d19ada8 from test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.200-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.295-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test4_fsmdb0.agg_out (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.307-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29)'. Ident: 'index-382--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 1077)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.200-0500 I STORAGE [conn77] Index build initialized: fd0586e4-d8b8-4a9d-b7c5-c92933705eab: test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f (d585e869-f5cf-413f-87d8-d66ec7926809 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.295-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 323f8a1c-6905-415d-a9b1-3c874d19ada8 from test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.307-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29)'. Ident: 'index-391--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 1077)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.200-0500 I INDEX [conn77] Waiting for index build to complete: fd0586e4-d8b8-4a9d-b7c5-c92933705eab
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.295-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29)'. Ident: 'index-382--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 1077)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.307-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-381--2310912778499990807, commit timestamp: Timestamp(1574796729, 1077)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.200-0500 I INDEX [conn82] Index build completed: 771b1fe8-9035-4843-b6d2-107cf0b30816
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.295-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29)'. Ident: 'index-391--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 1077)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.310-0500 I STORAGE [ReplWriterWorker-4] createCollection: test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 with provided UUID: e443177f-bf1f-4b57-962d-119402c8d5ba and options: { uuid: UUID("e443177f-bf1f-4b57-962d-119402c8d5ba"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.200-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.295-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-381--7234316082034423155, commit timestamp: Timestamp(1574796729, 1077)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.326-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.200-0500 I COMMAND [conn82] command test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 5866), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 11767 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2741ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.306-0500 I STORAGE [ReplWriterWorker-7] createCollection: test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 with provided UUID: e443177f-bf1f-4b57-962d-119402c8d5ba and options: { uuid: UUID("e443177f-bf1f-4b57-962d-119402c8d5ba"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.345-0500 I INDEX [ReplWriterWorker-3] index build: starting on test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.201-0500 I STORAGE [conn88] createCollection: test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 with generated UUID: 0919f278-8fd5-4dba-9cf5-30be42e59b7a and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.321-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.345-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.201-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.340-0500 I INDEX [ReplWriterWorker-14] index build: starting on test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.345-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: b2769ba9-e162-4a3b-8357-fdd30271f00d: test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 (0919f278-8fd5-4dba-9cf5-30be42e59b7a ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.203-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.340-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.346-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.213-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: fd0586e4-d8b8-4a9d-b7c5-c92933705eab: test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f ( d585e869-f5cf-413f-87d8-d66ec7926809 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.340-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 11a13b46-0ca3-44c6-ab23-5eb6f73d5f3a: test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 (0919f278-8fd5-4dba-9cf5-30be42e59b7a ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.346-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.347-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f (d585e869-f5cf-413f-87d8-d66ec7926809) to test4_fsmdb0.agg_out and drop 323f8a1c-6905-415d-a9b1-3c874d19ada8.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.340-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.213-0500 I INDEX [conn77] Index build completed: fd0586e4-d8b8-4a9d-b7c5-c92933705eab
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.348-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.341-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.214-0500 I COMMAND [conn77] command test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 5866), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 2820 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2745ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.349-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test4_fsmdb0.agg_out (323f8a1c-6905-415d-a9b1-3c874d19ada8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 1518), t: 1 } and commit timestamp Timestamp(1574796729, 1518)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.343-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f (d585e869-f5cf-413f-87d8-d66ec7926809) to test4_fsmdb0.agg_out and drop 323f8a1c-6905-415d-a9b1-3c874d19ada8.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.221-0500 I INDEX [conn88] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:11.407-0500 I COMMAND [conn53] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70") }, $clusterTime: { clusterTime: Timestamp(1574796729, 2532), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2009ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:11.461-0500 I COMMAND [conn170] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6") }, $clusterTime: { clusterTime: Timestamp(1574796729, 3537), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2021ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:11.602-0500 I SHARDING [conn22] distributed lock 'test4_fsmdb0' acquired for 'enableSharding', ts : 5ddd7dbb5cde74b6784bb98c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.349-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test4_fsmdb0.agg_out (323f8a1c-6905-415d-a9b1-3c874d19ada8).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.344-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.221-0500 I COMMAND [conn85] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:11.571-0500 I COMMAND [conn52] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a") }, $clusterTime: { clusterTime: Timestamp(1574796731, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 204ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:11.496-0500 I COMMAND [conn164] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458") }, $clusterTime: { clusterTime: Timestamp(1574796729, 3540), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2052ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:11.603-0500 I SHARDING [conn22] Enabling sharding for database [test4_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.349-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection d585e869-f5cf-413f-87d8-d66ec7926809 from test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.344-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test4_fsmdb0.agg_out (323f8a1c-6905-415d-a9b1-3c874d19ada8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 1518), t: 1 } and commit timestamp Timestamp(1574796729, 1518)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.221-0500 I STORAGE [conn85] dropCollection: test4_fsmdb0.agg_out (04e37496-38da-4cdd-bbf9-86733a6f175b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 509), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:11.599-0500 I COMMAND [conn53] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70") }, $clusterTime: { clusterTime: Timestamp(1574796731, 576), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 172ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:11.531-0500 I COMMAND [conn169] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51") }, $clusterTime: { clusterTime: Timestamp(1574796729, 4046), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 165ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:11.604-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7dbb5cde74b6784bb98c' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.349-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (323f8a1c-6905-415d-a9b1-3c874d19ada8)'. Ident: 'index-386--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 1518)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.344-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test4_fsmdb0.agg_out (323f8a1c-6905-415d-a9b1-3c874d19ada8).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.221-0500 I STORAGE [conn85] Finishing collection drop for test4_fsmdb0.agg_out (04e37496-38da-4cdd-bbf9-86733a6f175b).
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:11.671-0500 I COMMAND [conn170] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6") }, $clusterTime: { clusterTime: Timestamp(1574796731, 1145), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:629 protocol:op_msg 208ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:11.607-0500 I SHARDING [conn22] distributed lock 'test4_fsmdb0' acquired for 'shardCollection', ts : 5ddd7dbb5cde74b6784bb992
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.349-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (323f8a1c-6905-415d-a9b1-3c874d19ada8)'. Ident: 'index-393--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 1518)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.344-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection d585e869-f5cf-413f-87d8-d66ec7926809 from test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.221-0500 I STORAGE [conn85] renameCollection: renaming collection 1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29 from test4_fsmdb0.tmp.agg_out.fc6d3e46-4c2b-4ac4-8389-0518368897e1 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:11.608-0500 I SHARDING [conn22] distributed lock 'test4_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7dbb5cde74b6784bb994
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.349-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-385--2310912778499990807, commit timestamp: Timestamp(1574796729, 1518)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.344-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (323f8a1c-6905-415d-a9b1-3c874d19ada8)'. Ident: 'index-386--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 1518)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.221-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (04e37496-38da-4cdd-bbf9-86733a6f175b)'. Ident: 'index-374--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 509)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:11.615-0500 D4 TXN [conn49] New transaction started with txnNumber: 0 on session with lsid 9d626323-b536-4271-be6e-9b63288c184b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.349-0500 I STORAGE [ReplWriterWorker-10] createCollection: test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 with provided UUID: 0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a and options: { uuid: UUID("0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.344-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (323f8a1c-6905-415d-a9b1-3c874d19ada8)'. Ident: 'index-393--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 1518)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.221-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (04e37496-38da-4cdd-bbf9-86733a6f175b)'. Ident: 'index-376--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 509)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.352-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: b2769ba9-e162-4a3b-8357-fdd30271f00d: test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 ( 0919f278-8fd5-4dba-9cf5-30be42e59b7a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.344-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-385--7234316082034423155, commit timestamp: Timestamp(1574796729, 1518)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.221-0500 I STORAGE [conn85] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-372--2588534479858262356, commit timestamp: Timestamp(1574796729, 509)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.366-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.345-0500 I STORAGE [ReplWriterWorker-7] createCollection: test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 with provided UUID: 0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a and options: { uuid: UUID("0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.221-0500 I INDEX [conn88] Registering index build: bb8482c2-b80f-4ff0-9951-9a8fa999f189
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.385-0500 I INDEX [ReplWriterWorker-3] index build: starting on test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.346-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 11a13b46-0ca3-44c6-ab23-5eb6f73d5f3a: test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 ( 0919f278-8fd5-4dba-9cf5-30be42e59b7a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.221-0500 I COMMAND [conn65] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3317642003492017166, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2538240716945250440, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796726361), clusterTime: Timestamp(1574796726, 4354) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 4355), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2859ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.385-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.361-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.222-0500 I STORAGE [conn85] createCollection: test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 with generated UUID: 2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.385-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 3330e88d-7edc-421f-8bd8-195f1ac0ff7a: test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.378-0500 I INDEX [ReplWriterWorker-13] index build: starting on test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.224-0500 I STORAGE [conn84] createCollection: test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 with generated UUID: 35f6d45d-1abe-4d20-bf42-28a41d2a4f01 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.385-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.378-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.253-0500 I INDEX [conn88] index build: starting on test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.386-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.378-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 13b1e018-4ca0-4a47-a07e-22da268761db: test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.253-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.388-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.378-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.253-0500 I STORAGE [conn88] Index build initialized: bb8482c2-b80f-4ff0-9951-9a8fa999f189: test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 (0919f278-8fd5-4dba-9cf5-30be42e59b7a ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.390-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 (0919f278-8fd5-4dba-9cf5-30be42e59b7a) to test4_fsmdb0.agg_out and drop d585e869-f5cf-413f-87d8-d66ec7926809.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.379-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.253-0500 I INDEX [conn88] Waiting for index build to complete: bb8482c2-b80f-4ff0-9951-9a8fa999f189
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.390-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test4_fsmdb0.agg_out (d585e869-f5cf-413f-87d8-d66ec7926809) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 2023), t: 1 } and commit timestamp Timestamp(1574796729, 2023)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.382-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.259-0500 I INDEX [conn85] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.390-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test4_fsmdb0.agg_out (d585e869-f5cf-413f-87d8-d66ec7926809).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.382-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 (0919f278-8fd5-4dba-9cf5-30be42e59b7a) to test4_fsmdb0.agg_out and drop d585e869-f5cf-413f-87d8-d66ec7926809.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.266-0500 I INDEX [conn84] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.390-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 0919f278-8fd5-4dba-9cf5-30be42e59b7a from test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.382-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test4_fsmdb0.agg_out (d585e869-f5cf-413f-87d8-d66ec7926809) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 2023), t: 1 } and commit timestamp Timestamp(1574796729, 2023)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.267-0500 I COMMAND [conn82] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.390-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (d585e869-f5cf-413f-87d8-d66ec7926809)'. Ident: 'index-390--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 2023)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.382-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test4_fsmdb0.agg_out (d585e869-f5cf-413f-87d8-d66ec7926809).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.267-0500 I STORAGE [conn82] dropCollection: test4_fsmdb0.agg_out (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 1077), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.390-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (d585e869-f5cf-413f-87d8-d66ec7926809)'. Ident: 'index-397--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 2023)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.383-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 0919f278-8fd5-4dba-9cf5-30be42e59b7a from test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.267-0500 I STORAGE [conn82] Finishing collection drop for test4_fsmdb0.agg_out (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.390-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-389--2310912778499990807, commit timestamp: Timestamp(1574796729, 2023)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.383-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (d585e869-f5cf-413f-87d8-d66ec7926809)'. Ident: 'index-390--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 2023)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.267-0500 I STORAGE [conn82] renameCollection: renaming collection 323f8a1c-6905-415d-a9b1-3c874d19ada8 from test4_fsmdb0.tmp.agg_out.07b94d48-f821-497c-9ed2-2824f0a9d983 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.391-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 3330e88d-7edc-421f-8bd8-195f1ac0ff7a: test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 ( 2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.383-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (d585e869-f5cf-413f-87d8-d66ec7926809)'. Ident: 'index-397--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 2023)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.267-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29)'. Ident: 'index-375--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 1077)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.407-0500 I INDEX [ReplWriterWorker-14] index build: starting on test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.383-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-389--7234316082034423155, commit timestamp: Timestamp(1574796729, 2023)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.267-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (1076f2f3-5dcb-4991-aaf0-9ca8db3eeb29)'. Ident: 'index-378--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 1077)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.407-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.385-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 13b1e018-4ca0-4a47-a07e-22da268761db: test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 ( 2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.267-0500 I STORAGE [conn82] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-373--2588534479858262356, commit timestamp: Timestamp(1574796729, 1077)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.407-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 3af000cf-c397-45f1-8312-3556aa6754c4: test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 (35f6d45d-1abe-4d20-bf42-28a41d2a4f01 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.400-0500 I INDEX [ReplWriterWorker-15] index build: starting on test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.267-0500 I INDEX [conn85] Registering index build: 7aaadd13-1666-4906-ab72-7afffcbd1087
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.407-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.400-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.267-0500 I INDEX [conn84] Registering index build: 999a9b2c-2c63-4465-983f-3a0f2c4f55b7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.407-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.400-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 6a89ae53-642a-4cc9-8da9-b8a36799b55c: test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 (35f6d45d-1abe-4d20-bf42-28a41d2a4f01 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.267-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.409-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.400-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.267-0500 I COMMAND [conn62] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6812715320256564844, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 9992387399964898, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796726418), clusterTime: Timestamp(1574796726, 5361) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 5361), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2848ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.411-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3af000cf-c397-45f1-8312-3556aa6754c4: test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 ( 35f6d45d-1abe-4d20-bf42-28a41d2a4f01 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.401-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.268-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.413-0500 I STORAGE [ReplWriterWorker-3] createCollection: test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 with provided UUID: 8a82459e-f737-476a-a69b-6b7105d3aef7 and options: { uuid: UUID("8a82459e-f737-476a-a69b-6b7105d3aef7"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.405-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.270-0500 I STORAGE [conn82] createCollection: test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 with generated UUID: e443177f-bf1f-4b57-962d-119402c8d5ba and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.427-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.406-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 6a89ae53-642a-4cc9-8da9-b8a36799b55c: test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 ( 35f6d45d-1abe-4d20-bf42-28a41d2a4f01 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.279-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.444-0500 I INDEX [ReplWriterWorker-7] index build: starting on test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.408-0500 I STORAGE [ReplWriterWorker-7] createCollection: test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 with provided UUID: 8a82459e-f737-476a-a69b-6b7105d3aef7 and options: { uuid: UUID("8a82459e-f737-476a-a69b-6b7105d3aef7"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.294-0500 I INDEX [conn85] index build: starting on test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.444-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.423-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.294-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.444-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 7e43f849-bd0f-4f9e-8506-8aabaf647e3c: test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 (e443177f-bf1f-4b57-962d-119402c8d5ba ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.440-0500 I INDEX [ReplWriterWorker-2] index build: starting on test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.294-0500 I STORAGE [conn85] Index build initialized: 7aaadd13-1666-4906-ab72-7afffcbd1087: test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.444-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.440-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.294-0500 I INDEX [conn85] Waiting for index build to complete: 7aaadd13-1666-4906-ab72-7afffcbd1087
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.445-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.441-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: b84990a1-9a45-4ea5-84ce-d0f6615cabf4: test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 (e443177f-bf1f-4b57-962d-119402c8d5ba ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.295-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: bb8482c2-b80f-4ff0-9951-9a8fa999f189: test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 ( 0919f278-8fd5-4dba-9cf5-30be42e59b7a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.448-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.441-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.295-0500 I INDEX [conn88] Index build completed: bb8482c2-b80f-4ff0-9951-9a8fa999f189
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.448-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74) to test4_fsmdb0.agg_out and drop 0919f278-8fd5-4dba-9cf5-30be42e59b7a.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.441-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.301-0500 I INDEX [conn82] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.448-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test4_fsmdb0.agg_out (0919f278-8fd5-4dba-9cf5-30be42e59b7a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 2532), t: 1 } and commit timestamp Timestamp(1574796729, 2532)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.444-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.301-0500 I COMMAND [conn77] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.448-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test4_fsmdb0.agg_out (0919f278-8fd5-4dba-9cf5-30be42e59b7a).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.445-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74) to test4_fsmdb0.agg_out and drop 0919f278-8fd5-4dba-9cf5-30be42e59b7a.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.301-0500 I STORAGE [conn77] dropCollection: test4_fsmdb0.agg_out (323f8a1c-6905-415d-a9b1-3c874d19ada8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 1518), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.448-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74 from test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.445-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test4_fsmdb0.agg_out (0919f278-8fd5-4dba-9cf5-30be42e59b7a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 2532), t: 1 } and commit timestamp Timestamp(1574796729, 2532)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.301-0500 I STORAGE [conn77] Finishing collection drop for test4_fsmdb0.agg_out (323f8a1c-6905-415d-a9b1-3c874d19ada8).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.448-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (0919f278-8fd5-4dba-9cf5-30be42e59b7a)'. Ident: 'index-396--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 2532)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.445-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test4_fsmdb0.agg_out (0919f278-8fd5-4dba-9cf5-30be42e59b7a).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.301-0500 I STORAGE [conn77] renameCollection: renaming collection d585e869-f5cf-413f-87d8-d66ec7926809 from test4_fsmdb0.tmp.agg_out.b6960d80-e591-4d17-b303-d3fbfb5e369f to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.448-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (0919f278-8fd5-4dba-9cf5-30be42e59b7a)'. Ident: 'index-405--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 2532)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.445-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74 from test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.301-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (323f8a1c-6905-415d-a9b1-3c874d19ada8)'. Ident: 'index-382--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 1518)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.448-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-395--2310912778499990807, commit timestamp: Timestamp(1574796729, 2532)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.445-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (0919f278-8fd5-4dba-9cf5-30be42e59b7a)'. Ident: 'index-396--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 2532)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.301-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (323f8a1c-6905-415d-a9b1-3c874d19ada8)'. Ident: 'index-384--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 1518)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.449-0500 I STORAGE [ReplWriterWorker-5] createCollection: test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad with provided UUID: 1b702c2b-89f3-46c8-be2f-a0e511288776 and options: { uuid: UUID("1b702c2b-89f3-46c8-be2f-a0e511288776"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.445-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (0919f278-8fd5-4dba-9cf5-30be42e59b7a)'. Ident: 'index-405--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 2532)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.301-0500 I STORAGE [conn77] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-379--2588534479858262356, commit timestamp: Timestamp(1574796729, 1518)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.450-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 7e43f849-bd0f-4f9e-8506-8aabaf647e3c: test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 ( e443177f-bf1f-4b57-962d-119402c8d5ba ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.445-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-395--7234316082034423155, commit timestamp: Timestamp(1574796729, 2532)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.301-0500 I INDEX [conn82] Registering index build: 3a2946cd-6b6f-42ab-9d1e-744d1c24d8e8
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.464-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.446-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: b84990a1-9a45-4ea5-84ce-d0f6615cabf4: test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 ( e443177f-bf1f-4b57-962d-119402c8d5ba ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.302-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.496-0500 I INDEX [ReplWriterWorker-11] index build: starting on test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.448-0500 I STORAGE [ReplWriterWorker-2] createCollection: test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad with provided UUID: 1b702c2b-89f3-46c8-be2f-a0e511288776 and options: { uuid: UUID("1b702c2b-89f3-46c8-be2f-a0e511288776"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.302-0500 I COMMAND [conn64] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8087960283155430920, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3070741844422755094, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796726419), clusterTime: Timestamp(1574796726, 5361) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796726, 5361), signature: { hash: BinData(0, F7EF3025DE0D0D56407335B45DB1964F5F5ED405), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796725, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2881ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.496-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.464-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.302-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.496-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 8bcc9c63-482a-445a-9ee7-5c1a66a1d35b: test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.484-0500 I INDEX [ReplWriterWorker-0] index build: starting on test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.305-0500 I STORAGE [conn77] createCollection: test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 with generated UUID: 0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.496-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.484-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.311-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.497-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.484-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: b76891c5-eafd-4076-a96f-fb9742ad26be: test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.329-0500 I INDEX [conn84] index build: starting on test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.500-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.484-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.329-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.502-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 (35f6d45d-1abe-4d20-bf42-28a41d2a4f01) to test4_fsmdb0.agg_out and drop 2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.484-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.329-0500 I STORAGE [conn84] Index build initialized: 999a9b2c-2c63-4465-983f-3a0f2c4f55b7: test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 (35f6d45d-1abe-4d20-bf42-28a41d2a4f01 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.502-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test4_fsmdb0.agg_out (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 3537), t: 1 } and commit timestamp Timestamp(1574796729, 3537)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.486-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.329-0500 I INDEX [conn84] Waiting for index build to complete: 999a9b2c-2c63-4465-983f-3a0f2c4f55b7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.502-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test4_fsmdb0.agg_out (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.489-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 (35f6d45d-1abe-4d20-bf42-28a41d2a4f01) to test4_fsmdb0.agg_out and drop 2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.331-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 7aaadd13-1666-4906-ab72-7afffcbd1087: test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 ( 2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.502-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 35f6d45d-1abe-4d20-bf42-28a41d2a4f01 from test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.489-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test4_fsmdb0.agg_out (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 3537), t: 1 } and commit timestamp Timestamp(1574796729, 3537)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.331-0500 I INDEX [conn85] Index build completed: 7aaadd13-1666-4906-ab72-7afffcbd1087
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.502-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74)'. Ident: 'index-400--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 3537)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.489-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test4_fsmdb0.agg_out (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.339-0500 I INDEX [conn77] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.502-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74)'. Ident: 'index-409--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 3537)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.489-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 35f6d45d-1abe-4d20-bf42-28a41d2a4f01 from test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.339-0500 I COMMAND [conn88] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.502-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-399--2310912778499990807, commit timestamp: Timestamp(1574796729, 3537)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.489-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74)'. Ident: 'index-400--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 3537)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.339-0500 I STORAGE [conn88] dropCollection: test4_fsmdb0.agg_out (d585e869-f5cf-413f-87d8-d66ec7926809) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 2023), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.504-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 8bcc9c63-482a-445a-9ee7-5c1a66a1d35b: test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 ( 0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.489-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74)'. Ident: 'index-409--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 3537)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.339-0500 I STORAGE [conn88] Finishing collection drop for test4_fsmdb0.agg_out (d585e869-f5cf-413f-87d8-d66ec7926809).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.519-0500 I INDEX [ReplWriterWorker-3] index build: starting on test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.489-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-399--7234316082034423155, commit timestamp: Timestamp(1574796729, 3537)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.339-0500 I STORAGE [conn88] renameCollection: renaming collection 0919f278-8fd5-4dba-9cf5-30be42e59b7a from test4_fsmdb0.tmp.agg_out.cf956cf8-ae33-4a6f-bbc8-f3ec37005379 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.519-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.490-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: b76891c5-eafd-4076-a96f-fb9742ad26be: test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 ( 0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.339-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (d585e869-f5cf-413f-87d8-d66ec7926809)'. Ident: 'index-383--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 2023)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.519-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 4e093a5d-5403-435c-9b72-ba4be621a11e: test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 (8a82459e-f737-476a-a69b-6b7105d3aef7 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.506-0500 I INDEX [ReplWriterWorker-11] index build: starting on test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.339-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (d585e869-f5cf-413f-87d8-d66ec7926809)'. Ident: 'index-386--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 2023)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.519-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.506-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.339-0500 I STORAGE [conn88] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-380--2588534479858262356, commit timestamp: Timestamp(1574796729, 2023)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.519-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.506-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 16a6e4e5-5cbb-427c-9668-0e37f4ec7587: test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 (8a82459e-f737-476a-a69b-6b7105d3aef7 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.339-0500 I INDEX [conn77] Registering index build: 8b3d6682-675a-4053-9f5d-51415ded7e11
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.520-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 (e443177f-bf1f-4b57-962d-119402c8d5ba) to test4_fsmdb0.agg_out and drop 35f6d45d-1abe-4d20-bf42-28a41d2a4f01.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.506-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.339-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.521-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.507-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.340-0500 I COMMAND [conn81] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7928116127500570018, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8602580428191992584, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796729180), clusterTime: Timestamp(1574796729, 2) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796729, 2), signature: { hash: BinData(0, 1B85D1C27347EBF9FE8EFCAA3F67C7E41A1AA28E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 19760 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 159ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.521-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test4_fsmdb0.agg_out (35f6d45d-1abe-4d20-bf42-28a41d2a4f01) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 3540), t: 1 } and commit timestamp Timestamp(1574796729, 3540)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.508-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 (e443177f-bf1f-4b57-962d-119402c8d5ba) to test4_fsmdb0.agg_out and drop 35f6d45d-1abe-4d20-bf42-28a41d2a4f01.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.341-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.521-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test4_fsmdb0.agg_out (35f6d45d-1abe-4d20-bf42-28a41d2a4f01).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.508-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.352-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.521-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection e443177f-bf1f-4b57-962d-119402c8d5ba from test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.509-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test4_fsmdb0.agg_out (35f6d45d-1abe-4d20-bf42-28a41d2a4f01) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 3540), t: 1 } and commit timestamp Timestamp(1574796729, 3540)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.359-0500 I INDEX [conn82] index build: starting on test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.521-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (35f6d45d-1abe-4d20-bf42-28a41d2a4f01)'. Ident: 'index-402--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 3540)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.509-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test4_fsmdb0.agg_out (35f6d45d-1abe-4d20-bf42-28a41d2a4f01).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.359-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.521-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (35f6d45d-1abe-4d20-bf42-28a41d2a4f01)'. Ident: 'index-411--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 3540)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.509-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection e443177f-bf1f-4b57-962d-119402c8d5ba from test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.359-0500 I STORAGE [conn82] Index build initialized: 3a2946cd-6b6f-42ab-9d1e-744d1c24d8e8: test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 (e443177f-bf1f-4b57-962d-119402c8d5ba ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.522-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-401--2310912778499990807, commit timestamp: Timestamp(1574796729, 3540)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.509-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (35f6d45d-1abe-4d20-bf42-28a41d2a4f01)'. Ident: 'index-402--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 3540)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.359-0500 I INDEX [conn82] Waiting for index build to complete: 3a2946cd-6b6f-42ab-9d1e-744d1c24d8e8
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.522-0500 I STORAGE [ReplWriterWorker-14] createCollection: test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 with provided UUID: 657ce8ff-5de2-4302-ab06-b6667813fc84 and options: { uuid: UUID("657ce8ff-5de2-4302-ab06-b6667813fc84"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.509-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (35f6d45d-1abe-4d20-bf42-28a41d2a4f01)'. Ident: 'index-411--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 3540)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.359-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.523-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 4e093a5d-5403-435c-9b72-ba4be621a11e: test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 ( 8a82459e-f737-476a-a69b-6b7105d3aef7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.509-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-401--7234316082034423155, commit timestamp: Timestamp(1574796729, 3540)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.360-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 999a9b2c-2c63-4465-983f-3a0f2c4f55b7: test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 ( 35f6d45d-1abe-4d20-bf42-28a41d2a4f01 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.539-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.509-0500 I STORAGE [ReplWriterWorker-7] createCollection: test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 with provided UUID: 657ce8ff-5de2-4302-ab06-b6667813fc84 and options: { uuid: UUID("657ce8ff-5de2-4302-ab06-b6667813fc84"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.360-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.540-0500 I STORAGE [ReplWriterWorker-0] createCollection: test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 with provided UUID: 3a2e094a-ab6f-4d64-ba6a-e752ef678c07 and options: { uuid: UUID("3a2e094a-ab6f-4d64-ba6a-e752ef678c07"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.512-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 16a6e4e5-5cbb-427c-9668-0e37f4ec7587: test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 ( 8a82459e-f737-476a-a69b-6b7105d3aef7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.362-0500 I STORAGE [conn88] createCollection: test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 with generated UUID: 8a82459e-f737-476a-a69b-6b7105d3aef7 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.552-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.527-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.369-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.573-0500 I INDEX [ReplWriterWorker-5] index build: starting on test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.528-0500 I STORAGE [ReplWriterWorker-13] createCollection: test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 with provided UUID: 3a2e094a-ab6f-4d64-ba6a-e752ef678c07 and options: { uuid: UUID("3a2e094a-ab6f-4d64-ba6a-e752ef678c07"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.387-0500 I INDEX [conn77] index build: starting on test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.573-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.542-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.387-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.573-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 2967b576-74a1-4a69-bd8e-0d843321fe67: test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad (1b702c2b-89f3-46c8-be2f-a0e511288776 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.560-0500 I INDEX [ReplWriterWorker-15] index build: starting on test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.387-0500 I STORAGE [conn77] Index build initialized: 8b3d6682-675a-4053-9f5d-51415ded7e11: test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.573-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.560-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.387-0500 I INDEX [conn77] Waiting for index build to complete: 8b3d6682-675a-4053-9f5d-51415ded7e11
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.573-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.560-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: ae89100c-f432-4a99-890e-7d68efd74f22: test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad (1b702c2b-89f3-46c8-be2f-a0e511288776 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.387-0500 I INDEX [conn84] Index build completed: 999a9b2c-2c63-4465-983f-3a0f2c4f55b7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.574-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a) to test4_fsmdb0.agg_out and drop e443177f-bf1f-4b57-962d-119402c8d5ba.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.560-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.387-0500 I COMMAND [conn84] command test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796729, 1077), signature: { hash: BinData(0, 1B85D1C27347EBF9FE8EFCAA3F67C7E41A1AA28E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 7496 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 120ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.576-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.560-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.388-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 3a2946cd-6b6f-42ab-9d1e-744d1c24d8e8: test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 ( e443177f-bf1f-4b57-962d-119402c8d5ba ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.576-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test4_fsmdb0.agg_out (e443177f-bf1f-4b57-962d-119402c8d5ba) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 4046), t: 1 } and commit timestamp Timestamp(1574796729, 4046)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.561-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a) to test4_fsmdb0.agg_out and drop e443177f-bf1f-4b57-962d-119402c8d5ba.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.388-0500 I INDEX [conn82] Index build completed: 3a2946cd-6b6f-42ab-9d1e-744d1c24d8e8
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.576-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test4_fsmdb0.agg_out (e443177f-bf1f-4b57-962d-119402c8d5ba).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.562-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.396-0500 I INDEX [conn88] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.576-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a from test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.562-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test4_fsmdb0.agg_out (e443177f-bf1f-4b57-962d-119402c8d5ba) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 4046), t: 1 } and commit timestamp Timestamp(1574796729, 4046)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.396-0500 I COMMAND [conn85] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.576-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (e443177f-bf1f-4b57-962d-119402c8d5ba)'. Ident: 'index-404--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 4046)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.562-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test4_fsmdb0.agg_out (e443177f-bf1f-4b57-962d-119402c8d5ba).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.396-0500 I STORAGE [conn85] dropCollection: test4_fsmdb0.agg_out (0919f278-8fd5-4dba-9cf5-30be42e59b7a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 2532), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.576-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (e443177f-bf1f-4b57-962d-119402c8d5ba)'. Ident: 'index-415--2310912778499990807', commit timestamp: 'Timestamp(1574796729, 4046)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.562-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a from test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.396-0500 I STORAGE [conn85] Finishing collection drop for test4_fsmdb0.agg_out (0919f278-8fd5-4dba-9cf5-30be42e59b7a).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.576-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-403--2310912778499990807, commit timestamp: Timestamp(1574796729, 4046)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.562-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (e443177f-bf1f-4b57-962d-119402c8d5ba)'. Ident: 'index-404--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 4046)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.396-0500 I STORAGE [conn85] renameCollection: renaming collection 2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74 from test4_fsmdb0.tmp.agg_out.5f877c89-d48a-46a2-9592-152235a2a914 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:09.577-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 2967b576-74a1-4a69-bd8e-0d843321fe67: test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad ( 1b702c2b-89f3-46c8-be2f-a0e511288776 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.562-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (e443177f-bf1f-4b57-962d-119402c8d5ba)'. Ident: 'index-415--7234316082034423155', commit timestamp: 'Timestamp(1574796729, 4046)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.396-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (0919f278-8fd5-4dba-9cf5-30be42e59b7a)'. Ident: 'index-389--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 2532)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.369-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 (8a82459e-f737-476a-a69b-6b7105d3aef7) to test4_fsmdb0.agg_out and drop 0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.562-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-403--7234316082034423155, commit timestamp: Timestamp(1574796729, 4046)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.396-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (0919f278-8fd5-4dba-9cf5-30be42e59b7a)'. Ident: 'index-390--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 2532)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.370-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test4_fsmdb0.agg_out (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 2), t: 1 } and commit timestamp Timestamp(1574796731, 2)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:09.564-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: ae89100c-f432-4a99-890e-7d68efd74f22: test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad ( 1b702c2b-89f3-46c8-be2f-a0e511288776 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.396-0500 I STORAGE [conn85] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-388--2588534479858262356, commit timestamp: Timestamp(1574796729, 2532)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.370-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test4_fsmdb0.agg_out (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.369-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 (8a82459e-f737-476a-a69b-6b7105d3aef7) to test4_fsmdb0.agg_out and drop 0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.396-0500 I INDEX [conn88] Registering index build: b32a09ee-27aa-4e9b-b44f-df74f21ea227
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.370-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 8a82459e-f737-476a-a69b-6b7105d3aef7 from test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.369-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test4_fsmdb0.agg_out (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 2), t: 1 } and commit timestamp Timestamp(1574796731, 2)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.396-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.370-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a)'. Ident: 'index-408--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 2)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.369-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test4_fsmdb0.agg_out (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.397-0500 I COMMAND [conn80] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7791259154972353191, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 762275439629223015, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796729201), clusterTime: Timestamp(1574796729, 505) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796729, 506), signature: { hash: BinData(0, 1B85D1C27347EBF9FE8EFCAA3F67C7E41A1AA28E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 19120 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 194ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.370-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a)'. Ident: 'index-419--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 2)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.369-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 8a82459e-f737-476a-a69b-6b7105d3aef7 from test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.397-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.370-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-407--2310912778499990807, commit timestamp: Timestamp(1574796731, 2)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.369-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a)'. Ident: 'index-408--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 2)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.400-0500 I STORAGE [conn85] createCollection: test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad with generated UUID: 1b702c2b-89f3-46c8-be2f-a0e511288776 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.400-0500 I STORAGE [ReplWriterWorker-8] createCollection: test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d with provided UUID: 33c3810a-89c2-4b7d-91d1-9199ea59da61 and options: { uuid: UUID("33c3810a-89c2-4b7d-91d1-9199ea59da61"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.369-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a)'. Ident: 'index-419--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 2)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.408-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.416-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.370-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-407--7234316082034423155, commit timestamp: Timestamp(1574796731, 2)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.426-0500 I INDEX [conn88] index build: starting on test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.417-0500 I STORAGE [ReplWriterWorker-4] createCollection: test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 with provided UUID: 7ea18e16-bafc-4b63-ae0b-6446e5352548 and options: { uuid: UUID("7ea18e16-bafc-4b63-ae0b-6446e5352548"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.400-0500 I STORAGE [ReplWriterWorker-9] createCollection: test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d with provided UUID: 33c3810a-89c2-4b7d-91d1-9199ea59da61 and options: { uuid: UUID("33c3810a-89c2-4b7d-91d1-9199ea59da61"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.426-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.431-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.416-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.426-0500 I STORAGE [conn88] Index build initialized: b32a09ee-27aa-4e9b-b44f-df74f21ea227: test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 (8a82459e-f737-476a-a69b-6b7105d3aef7 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.450-0500 I INDEX [ReplWriterWorker-3] index build: starting on test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.417-0500 I STORAGE [ReplWriterWorker-3] createCollection: test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 with provided UUID: 7ea18e16-bafc-4b63-ae0b-6446e5352548 and options: { uuid: UUID("7ea18e16-bafc-4b63-ae0b-6446e5352548"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.426-0500 I INDEX [conn88] Waiting for index build to complete: b32a09ee-27aa-4e9b-b44f-df74f21ea227
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.450-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.430-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.430-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 8b3d6682-675a-4053-9f5d-51415ded7e11: test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 ( 0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.450-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: ab2e8e38-d866-45cb-a934-0c7d7c3029a8: test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 (657ce8ff-5de2-4302-ab06-b6667813fc84 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.459-0500 I INDEX [ReplWriterWorker-14] index build: starting on test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.430-0500 I INDEX [conn77] Index build completed: 8b3d6682-675a-4053-9f5d-51415ded7e11
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.450-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.459-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.438-0500 I INDEX [conn85] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.451-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.459-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: adcb9d7c-8b09-4c4a-97d4-b2501f207ff1: test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 (657ce8ff-5de2-4302-ab06-b6667813fc84 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.438-0500 I COMMAND [conn84] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.453-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.459-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.438-0500 I STORAGE [conn84] dropCollection: test4_fsmdb0.agg_out (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 3537), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.456-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ab2e8e38-d866-45cb-a934-0c7d7c3029a8: test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 ( 657ce8ff-5de2-4302-ab06-b6667813fc84 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.460-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.438-0500 I STORAGE [conn84] Finishing collection drop for test4_fsmdb0.agg_out (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.470-0500 I INDEX [ReplWriterWorker-3] index build: starting on test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.462-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.438-0500 I STORAGE [conn84] renameCollection: renaming collection 35f6d45d-1abe-4d20-bf42-28a41d2a4f01 from test4_fsmdb0.tmp.agg_out.dd2e4c98-95d1-45ee-ba00-b5d93d54e730 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.470-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.466-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: adcb9d7c-8b09-4c4a-97d4-b2501f207ff1: test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 ( 657ce8ff-5de2-4302-ab06-b6667813fc84 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.438-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74)'. Ident: 'index-394--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 3537)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.470-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 3b3f01b8-06ce-4a10-aa28-49c24d0d93a0: test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 (3a2e094a-ab6f-4d64-ba6a-e752ef678c07 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.480-0500 I INDEX [ReplWriterWorker-10] index build: starting on test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.438-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (2d12ba5c-f87f-4d6a-9b19-d9e0dcf52c74)'. Ident: 'index-396--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 3537)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.470-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.480-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.438-0500 I STORAGE [conn84] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-391--2588534479858262356, commit timestamp: Timestamp(1574796729, 3537)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.471-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.480-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: bf02ab4d-8c09-49d1-b512-c13ee9c9d75f: test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 (3a2e094a-ab6f-4d64-ba6a-e752ef678c07 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.438-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.472-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad (1b702c2b-89f3-46c8-be2f-a0e511288776) to test4_fsmdb0.agg_out and drop 8a82459e-f737-476a-a69b-6b7105d3aef7.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.480-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.439-0500 I COMMAND [conn65] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4478411392911233328, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5664105381170884826, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796729223), clusterTime: Timestamp(1574796729, 573) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796729, 574), signature: { hash: BinData(0, 1B85D1C27347EBF9FE8EFCAA3F67C7E41A1AA28E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 214ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.473-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.481-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.439-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.473-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test4_fsmdb0.agg_out (8a82459e-f737-476a-a69b-6b7105d3aef7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 510), t: 1 } and commit timestamp Timestamp(1574796731, 510)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.482-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad (1b702c2b-89f3-46c8-be2f-a0e511288776) to test4_fsmdb0.agg_out and drop 8a82459e-f737-476a-a69b-6b7105d3aef7.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.441-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.473-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test4_fsmdb0.agg_out (8a82459e-f737-476a-a69b-6b7105d3aef7).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.484-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.442-0500 I COMMAND [conn82] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.473-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 1b702c2b-89f3-46c8-be2f-a0e511288776 from test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.484-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test4_fsmdb0.agg_out (8a82459e-f737-476a-a69b-6b7105d3aef7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 510), t: 1 } and commit timestamp Timestamp(1574796731, 510)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.442-0500 I STORAGE [conn82] dropCollection: test4_fsmdb0.agg_out (35f6d45d-1abe-4d20-bf42-28a41d2a4f01) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 3540), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.473-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (8a82459e-f737-476a-a69b-6b7105d3aef7)'. Ident: 'index-414--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 510)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.484-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test4_fsmdb0.agg_out (8a82459e-f737-476a-a69b-6b7105d3aef7).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.442-0500 I STORAGE [conn82] Finishing collection drop for test4_fsmdb0.agg_out (35f6d45d-1abe-4d20-bf42-28a41d2a4f01).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.473-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (8a82459e-f737-476a-a69b-6b7105d3aef7)'. Ident: 'index-421--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 510)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.484-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 1b702c2b-89f3-46c8-be2f-a0e511288776 from test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.442-0500 I STORAGE [conn82] renameCollection: renaming collection e443177f-bf1f-4b57-962d-119402c8d5ba from test4_fsmdb0.tmp.agg_out.8e6fc5ce-dd72-4871-88db-2c60cf86f843 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.474-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-413--2310912778499990807, commit timestamp: Timestamp(1574796731, 510)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.484-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (8a82459e-f737-476a-a69b-6b7105d3aef7)'. Ident: 'index-414--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 510)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.442-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (35f6d45d-1abe-4d20-bf42-28a41d2a4f01)'. Ident: 'index-395--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 3540)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.476-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3b3f01b8-06ce-4a10-aa28-49c24d0d93a0: test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 ( 3a2e094a-ab6f-4d64-ba6a-e752ef678c07 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.484-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (8a82459e-f737-476a-a69b-6b7105d3aef7)'. Ident: 'index-421--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 510)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.442-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (35f6d45d-1abe-4d20-bf42-28a41d2a4f01)'. Ident: 'index-400--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 3540)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.480-0500 I STORAGE [ReplWriterWorker-6] createCollection: test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 with provided UUID: 6d7b1b53-805f-4e82-a6e8-dfd96f7e7393 and options: { uuid: UUID("6d7b1b53-805f-4e82-a6e8-dfd96f7e7393"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.484-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-413--7234316082034423155, commit timestamp: Timestamp(1574796731, 510)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.442-0500 I STORAGE [conn82] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-392--2588534479858262356, commit timestamp: Timestamp(1574796729, 3540)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.495-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.485-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: bf02ab4d-8c09-49d1-b512-c13ee9c9d75f: test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 ( 3a2e094a-ab6f-4d64-ba6a-e752ef678c07 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.442-0500 I INDEX [conn85] Registering index build: 653f7886-1432-4e98-8d75-839b420d4ed2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.516-0500 I INDEX [ReplWriterWorker-3] index build: starting on test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.491-0500 I STORAGE [ReplWriterWorker-12] createCollection: test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 with provided UUID: 6d7b1b53-805f-4e82-a6e8-dfd96f7e7393 and options: { uuid: UUID("6d7b1b53-805f-4e82-a6e8-dfd96f7e7393"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.442-0500 I COMMAND [conn62] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7996875786321154070, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 920379813867812284, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796729269), clusterTime: Timestamp(1574796729, 1141) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796729, 1205), signature: { hash: BinData(0, 1B85D1C27347EBF9FE8EFCAA3F67C7E41A1AA28E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 172ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.516-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.506-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.443-0500 I STORAGE [conn82] createCollection: test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 with generated UUID: 657ce8ff-5de2-4302-ab06-b6667813fc84 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.516-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 94a72bb0-f495-41eb-a7b5-9579089660ec: test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d (33c3810a-89c2-4b7d-91d1-9199ea59da61 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.525-0500 I INDEX [ReplWriterWorker-2] index build: starting on test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.445-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: b32a09ee-27aa-4e9b-b44f-df74f21ea227: test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 ( 8a82459e-f737-476a-a69b-6b7105d3aef7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.516-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.525-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.445-0500 I STORAGE [conn84] createCollection: test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 with generated UUID: 3a2e094a-ab6f-4d64-ba6a-e752ef678c07 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.517-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.525-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 3b8bd78c-aa70-4ebb-8b77-6fc7100d0b32: test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d (33c3810a-89c2-4b7d-91d1-9199ea59da61 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.476-0500 I INDEX [conn85] index build: starting on test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.518-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 (657ce8ff-5de2-4302-ab06-b6667813fc84) to test4_fsmdb0.agg_out and drop 1b702c2b-89f3-46c8-be2f-a0e511288776.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.526-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.476-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.519-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.526-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.476-0500 I STORAGE [conn85] Index build initialized: 653f7886-1432-4e98-8d75-839b420d4ed2: test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad (1b702c2b-89f3-46c8-be2f-a0e511288776 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.519-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test4_fsmdb0.agg_out (1b702c2b-89f3-46c8-be2f-a0e511288776) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 1081), t: 1 } and commit timestamp Timestamp(1574796731, 1081)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.527-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 (657ce8ff-5de2-4302-ab06-b6667813fc84) to test4_fsmdb0.agg_out and drop 1b702c2b-89f3-46c8-be2f-a0e511288776.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.476-0500 I INDEX [conn85] Waiting for index build to complete: 653f7886-1432-4e98-8d75-839b420d4ed2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.519-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test4_fsmdb0.agg_out (1b702c2b-89f3-46c8-be2f-a0e511288776).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.528-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.476-0500 I INDEX [conn88] Index build completed: b32a09ee-27aa-4e9b-b44f-df74f21ea227
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.520-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 657ce8ff-5de2-4302-ab06-b6667813fc84 from test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.529-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test4_fsmdb0.agg_out (1b702c2b-89f3-46c8-be2f-a0e511288776) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 1081), t: 1 } and commit timestamp Timestamp(1574796731, 1081)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.476-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.520-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (1b702c2b-89f3-46c8-be2f-a0e511288776)'. Ident: 'index-418--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 1081)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.529-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test4_fsmdb0.agg_out (1b702c2b-89f3-46c8-be2f-a0e511288776).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.483-0500 I INDEX [conn82] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.520-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (1b702c2b-89f3-46c8-be2f-a0e511288776)'. Ident: 'index-427--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 1081)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.529-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 657ce8ff-5de2-4302-ab06-b6667813fc84 from test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.489-0500 I INDEX [conn84] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.520-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-417--2310912778499990807, commit timestamp: Timestamp(1574796731, 1081)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.529-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (1b702c2b-89f3-46c8-be2f-a0e511288776)'. Ident: 'index-418--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 1081)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.489-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.522-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 94a72bb0-f495-41eb-a7b5-9579089660ec: test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d ( 33c3810a-89c2-4b7d-91d1-9199ea59da61 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.529-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (1b702c2b-89f3-46c8-be2f-a0e511288776)'. Ident: 'index-427--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 1081)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.492-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.522-0500 I STORAGE [ReplWriterWorker-7] createCollection: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 with provided UUID: 4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad and options: { uuid: UUID("4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.529-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-417--7234316082034423155, commit timestamp: Timestamp(1574796731, 1081)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.492-0500 I COMMAND [conn77] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.536-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.530-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3b8bd78c-aa70-4ebb-8b77-6fc7100d0b32: test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d ( 33c3810a-89c2-4b7d-91d1-9199ea59da61 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.492-0500 I STORAGE [conn77] dropCollection: test4_fsmdb0.agg_out (e443177f-bf1f-4b57-962d-119402c8d5ba) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796729, 4046), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.555-0500 I INDEX [ReplWriterWorker-3] index build: starting on test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.531-0500 I STORAGE [ReplWriterWorker-4] createCollection: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 with provided UUID: 4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad and options: { uuid: UUID("4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.492-0500 I STORAGE [conn77] Finishing collection drop for test4_fsmdb0.agg_out (e443177f-bf1f-4b57-962d-119402c8d5ba).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.555-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.548-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.492-0500 I STORAGE [conn77] renameCollection: renaming collection 0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a from test4_fsmdb0.tmp.agg_out.b137f4e2-46c9-4108-b0e9-a269500ebe83 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.555-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: e7888025-1e29-4a90-a4a2-ec2812495f28: test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 (7ea18e16-bafc-4b63-ae0b-6446e5352548 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.565-0500 I INDEX [ReplWriterWorker-7] index build: starting on test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.492-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (e443177f-bf1f-4b57-962d-119402c8d5ba)'. Ident: 'index-399--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 4046)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.555-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.565-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.492-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (e443177f-bf1f-4b57-962d-119402c8d5ba)'. Ident: 'index-404--2588534479858262356', commit timestamp: 'Timestamp(1574796729, 4046)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.556-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.565-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: d48a40bd-9b11-4615-9b75-ad5c3f20a29e: test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 (7ea18e16-bafc-4b63-ae0b-6446e5352548 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.492-0500 I STORAGE [conn77] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-397--2588534479858262356, commit timestamp: Timestamp(1574796729, 4046)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.558-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.565-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.492-0500 I INDEX [conn82] Registering index build: 714579cd-5493-45ed-8ed5-bf8197f9f5f6
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.560-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 (3a2e094a-ab6f-4d64-ba6a-e752ef678c07) to test4_fsmdb0.agg_out and drop 657ce8ff-5de2-4302-ab06-b6667813fc84.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.566-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.492-0500 I INDEX [conn84] Registering index build: affbdbfb-0cd1-4059-a196-ef5be6d283e1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.560-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test4_fsmdb0.agg_out (657ce8ff-5de2-4302-ab06-b6667813fc84) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 1650), t: 1 } and commit timestamp Timestamp(1574796731, 1650)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.568-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.493-0500 I COMMAND [conn64] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7868223314117226042, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2230785086522962771, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796729303), clusterTime: Timestamp(1574796729, 1518) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796729, 1518), signature: { hash: BinData(0, 1B85D1C27347EBF9FE8EFCAA3F67C7E41A1AA28E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 188ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.560-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test4_fsmdb0.agg_out (657ce8ff-5de2-4302-ab06-b6667813fc84).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.570-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 (3a2e094a-ab6f-4d64-ba6a-e752ef678c07) to test4_fsmdb0.agg_out and drop 657ce8ff-5de2-4302-ab06-b6667813fc84.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.560-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 3a2e094a-ab6f-4d64-ba6a-e752ef678c07 from test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.570-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test4_fsmdb0.agg_out (657ce8ff-5de2-4302-ab06-b6667813fc84) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 1650), t: 1 } and commit timestamp Timestamp(1574796731, 1650)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.496-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 653f7886-1432-4e98-8d75-839b420d4ed2: test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad ( 1b702c2b-89f3-46c8-be2f-a0e511288776 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.560-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (657ce8ff-5de2-4302-ab06-b6667813fc84)'. Ident: 'index-424--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 1650)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.570-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test4_fsmdb0.agg_out (657ce8ff-5de2-4302-ab06-b6667813fc84).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:09.514-0500 I INDEX [conn82] index build: starting on test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.560-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (657ce8ff-5de2-4302-ab06-b6667813fc84)'. Ident: 'index-433--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 1650)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.570-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 3a2e094a-ab6f-4d64-ba6a-e752ef678c07 from test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.365-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.270-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.560-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-423--2310912778499990807, commit timestamp: Timestamp(1574796731, 1650)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.570-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (657ce8ff-5de2-4302-ab06-b6667813fc84)'. Ident: 'index-424--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 1650)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.365-0500 I STORAGE [conn82] Index build initialized: 714579cd-5493-45ed-8ed5-bf8197f9f5f6: test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 (657ce8ff-5de2-4302-ab06-b6667813fc84 ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.270-0500 I COMMAND [conn169] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51") }, $clusterTime: { clusterTime: Timestamp(1574796731, 2155), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:629 protocol:op_msg 738ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.562-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: e7888025-1e29-4a90-a4a2-ec2812495f28: test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 ( 7ea18e16-bafc-4b63-ae0b-6446e5352548 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.570-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (657ce8ff-5de2-4302-ab06-b6667813fc84)'. Ident: 'index-433--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 1650)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.365-0500 I INDEX [conn82] Waiting for index build to complete: 714579cd-5493-45ed-8ed5-bf8197f9f5f6
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.562-0500 I STORAGE [ReplWriterWorker-10] createCollection: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 with provided UUID: d928bc79-920f-4cd2-b7dd-0b7f7408581a and options: { uuid: UUID("d928bc79-920f-4cd2-b7dd-0b7f7408581a"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.271-0500 I COMMAND [conn164] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458") }, $clusterTime: { clusterTime: Timestamp(1574796731, 1714), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:629 protocol:op_msg 772ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.271-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out to version 1|0||5ddd7dbbcf8184c2e1494ea3 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.570-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-423--7234316082034423155, commit timestamp: Timestamp(1574796731, 1650)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.365-0500 I INDEX [conn85] Index build completed: 653f7886-1432-4e98-8d75-839b420d4ed2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.578-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.571-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d48a40bd-9b11-4615-9b75-ad5c3f20a29e: test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 ( 7ea18e16-bafc-4b63-ae0b-6446e5352548 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.365-0500 I COMMAND [conn88] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.595-0500 I INDEX [ReplWriterWorker-0] index build: starting on test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.573-0500 I STORAGE [ReplWriterWorker-5] createCollection: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 with provided UUID: d928bc79-920f-4cd2-b7dd-0b7f7408581a and options: { uuid: UUID("d928bc79-920f-4cd2-b7dd-0b7f7408581a"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.365-0500 I COMMAND [conn85] command test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796729, 3537), signature: { hash: BinData(0, 1B85D1C27347EBF9FE8EFCAA3F67C7E41A1AA28E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 3905 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 1926ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.595-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.587-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.365-0500 I STORAGE [conn88] dropCollection: test4_fsmdb0.agg_out (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 2), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.595-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: d7528958-3e70-4b9a-aea2-51388f77f7cf: test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.604-0500 I INDEX [ReplWriterWorker-12] index build: starting on test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.365-0500 I STORAGE [conn88] Finishing collection drop for test4_fsmdb0.agg_out (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.596-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.604-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.365-0500 I STORAGE [conn88] renameCollection: renaming collection 8a82459e-f737-476a-a69b-6b7105d3aef7 from test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.596-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.604-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: a06cfa7a-2517-4d28-8e06-9ee7589ac3db: test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.272-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7dbb5cde74b6784bb994' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.365-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a)'. Ident: 'index-403--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 2)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.599-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.604-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.365-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (0d45e71c-47a1-4d4f-bcac-0b0e5bc6f81a)'. Ident: 'index-406--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 2)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.602-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: d7528958-3e70-4b9a-aea2-51388f77f7cf: test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 ( 6d7b1b53-805f-4e82-a6e8-dfd96f7e7393 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.605-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.365-0500 I STORAGE [conn88] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-401--2588534479858262356, commit timestamp: Timestamp(1574796731, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.603-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d (33c3810a-89c2-4b7d-91d1-9199ea59da61) to test4_fsmdb0.agg_out and drop 3a2e094a-ab6f-4d64-ba6a-e752ef678c07.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.607-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.365-0500 I COMMAND [conn88] command test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311 appName: "tid:3" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test4_fsmdb0.tmp.agg_out.89b925c4-6c9f-4579-a23e-958f2e236311", to: "test4_fsmdb0.agg_out", collectionOptions: { validationLevel: "strict", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796729, 4546), signature: { hash: BinData(0, 1B85D1C27347EBF9FE8EFCAA3F67C7E41A1AA28E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 1848292 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 1848ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.603-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test4_fsmdb0.agg_out (3a2e094a-ab6f-4d64-ba6a-e752ef678c07) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 2091), t: 1 } and commit timestamp Timestamp(1574796731, 2091)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.611-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d (33c3810a-89c2-4b7d-91d1-9199ea59da61) to test4_fsmdb0.agg_out and drop 3a2e094a-ab6f-4d64-ba6a-e752ef678c07.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.365-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.603-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test4_fsmdb0.agg_out (3a2e094a-ab6f-4d64-ba6a-e752ef678c07).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.611-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test4_fsmdb0.agg_out (3a2e094a-ab6f-4d64-ba6a-e752ef678c07) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 2091), t: 1 } and commit timestamp Timestamp(1574796731, 2091)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.366-0500 I COMMAND [conn81] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2535705742887748078, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5849053394420991655, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796729360), clusterTime: Timestamp(1574796729, 2027) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796729, 2091), signature: { hash: BinData(0, 1B85D1C27347EBF9FE8EFCAA3F67C7E41A1AA28E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2004ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.603-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 33c3810a-89c2-4b7d-91d1-9199ea59da61 from test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.611-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test4_fsmdb0.agg_out (3a2e094a-ab6f-4d64-ba6a-e752ef678c07).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.366-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.603-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (3a2e094a-ab6f-4d64-ba6a-e752ef678c07)'. Ident: 'index-426--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 2091)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.611-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 33c3810a-89c2-4b7d-91d1-9199ea59da61 from test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.274-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7dbb5cde74b6784bb992' unlocked.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.368-0500 I COMMAND [conn197] command test4_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796729, 3540), lsid: { id: UUID("3f42616f-e948-49d9-97ee-c2eb72d5ff98") }, $clusterTime: { clusterTime: Timestamp(1574796729, 3606), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796729, 3540). Collection minimum timestamp is Timestamp(1574796729, 4045)" errName:SnapshotUnavailable errCode:246 reslen:582 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 1841851 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 1844ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.603-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (3a2e094a-ab6f-4d64-ba6a-e752ef678c07)'. Ident: 'index-435--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 2091)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.611-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (3a2e094a-ab6f-4d64-ba6a-e752ef678c07)'. Ident: 'index-426--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 2091)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.274-0500 I COMMAND [conn22] command admin.$cmd appName: "tid:1" command: _configsvrShardCollection { _configsvrShardCollection: "test4_fsmdb0.agg_out", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1574796731, 3230), signature: { hash: BinData(0, E2360818581976968E8FEF9467289F3A6EA54891), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796731, 3230), t: 1 } }, $db: "admin" } numYields:0 reslen:586 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 6 } }, Global: { acquireCount: { r: 2, w: 4 } }, Database: { acquireCount: { r: 2, w: 4 } }, Collection: { acquireCount: { r: 2, w: 4 } }, Mutex: { acquireCount: { r: 10, W: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 668ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.368-0500 I STORAGE [conn85] createCollection: test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d with generated UUID: 33c3810a-89c2-4b7d-91d1-9199ea59da61 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:12.274-0500 I COMMAND [conn53] command test4_fsmdb0.agg_out appName: "tid:1" command: shardCollection { shardCollection: "test4_fsmdb0.agg_out", key: { _id: "hashed" }, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70") }, $clusterTime: { clusterTime: Timestamp(1574796731, 3230), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:244 protocol:op_msg 669ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.603-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-425--2310912778499990807, commit timestamp: Timestamp(1574796731, 2091)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.611-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (3a2e094a-ab6f-4d64-ba6a-e752ef678c07)'. Ident: 'index-435--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 2091)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.369-0500 I STORAGE [conn88] createCollection: test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 with generated UUID: 7ea18e16-bafc-4b63-ae0b-6446e5352548 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.620-0500 I INDEX [ReplWriterWorker-6] index build: starting on test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.611-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-425--7234316082034423155, commit timestamp: Timestamp(1574796731, 2091)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.375-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.620-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.612-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: a06cfa7a-2517-4d28-8e06-9ee7589ac3db: test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 ( 6d7b1b53-805f-4e82-a6e8-dfd96f7e7393 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.391-0500 I INDEX [conn84] index build: starting on test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.620-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: d490bb21-c1fb-4269-aaac-c99302426fab: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 (4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.628-0500 I INDEX [ReplWriterWorker-15] index build: starting on test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.391-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.620-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.628-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.391-0500 I STORAGE [conn84] Index build initialized: affbdbfb-0cd1-4059-a196-ef5be6d283e1: test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 (3a2e094a-ab6f-4d64-ba6a-e752ef678c07 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.621-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.628-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 3c3e57fd-1670-4602-91a9-5f6a927497bd: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 (4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.391-0500 I INDEX [conn84] Waiting for index build to complete: affbdbfb-0cd1-4059-a196-ef5be6d283e1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.621-0500 I STORAGE [ReplWriterWorker-10] createCollection: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 with provided UUID: 85b70d54-2e55-44ce-8459-084207afda61 and options: { uuid: UUID("85b70d54-2e55-44ce-8459-084207afda61"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.628-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.391-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.623-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:12.276-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.629-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.392-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 714579cd-5493-45ed-8ed5-bf8197f9f5f6: test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 ( 657ce8ff-5de2-4302-ab06-b6667813fc84 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.634-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d490bb21-c1fb-4269-aaac-c99302426fab: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 ( 4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.631-0500 I STORAGE [ReplWriterWorker-9] createCollection: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 with provided UUID: 85b70d54-2e55-44ce-8459-084207afda61 and options: { uuid: UUID("85b70d54-2e55-44ce-8459-084207afda61"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.392-0500 I INDEX [conn82] Index build completed: 714579cd-5493-45ed-8ed5-bf8197f9f5f6
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.640-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.631-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.392-0500 I COMMAND [conn82] command test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796729, 4043), signature: { hash: BinData(0, 1B85D1C27347EBF9FE8EFCAA3F67C7E41A1AA28E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 8899 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 1908ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.645-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 (7ea18e16-bafc-4b63-ae0b-6446e5352548) to test4_fsmdb0.agg_out and drop 33c3810a-89c2-4b7d-91d1-9199ea59da61.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.641-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 3c3e57fd-1670-4602-91a9-5f6a927497bd: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 ( 4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.398-0500 I INDEX [conn85] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.645-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test4_fsmdb0.agg_out (33c3810a-89c2-4b7d-91d1-9199ea59da61) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 2596), t: 1 } and commit timestamp Timestamp(1574796731, 2596)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.649-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.403-0500 I INDEX [conn88] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.645-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test4_fsmdb0.agg_out (33c3810a-89c2-4b7d-91d1-9199ea59da61).
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:12.277-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out to version 1|0||5ddd7dbbcf8184c2e1494ea3 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.654-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 (7ea18e16-bafc-4b63-ae0b-6446e5352548) to test4_fsmdb0.agg_out and drop 33c3810a-89c2-4b7d-91d1-9199ea59da61.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.404-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.645-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 7ea18e16-bafc-4b63-ae0b-6446e5352548 from test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.654-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test4_fsmdb0.agg_out (33c3810a-89c2-4b7d-91d1-9199ea59da61) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 2596), t: 1 } and commit timestamp Timestamp(1574796731, 2596)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.406-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.645-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (33c3810a-89c2-4b7d-91d1-9199ea59da61)'. Ident: 'index-430--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 2596)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.654-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test4_fsmdb0.agg_out (33c3810a-89c2-4b7d-91d1-9199ea59da61).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.406-0500 I COMMAND [conn77] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.645-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (33c3810a-89c2-4b7d-91d1-9199ea59da61)'. Ident: 'index-439--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 2596)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.654-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 7ea18e16-bafc-4b63-ae0b-6446e5352548 from test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.406-0500 I STORAGE [conn77] dropCollection: test4_fsmdb0.agg_out (8a82459e-f737-476a-a69b-6b7105d3aef7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 510), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.645-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-429--2310912778499990807, commit timestamp: Timestamp(1574796731, 2596)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.654-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (33c3810a-89c2-4b7d-91d1-9199ea59da61)'. Ident: 'index-430--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 2596)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.406-0500 I STORAGE [conn77] Finishing collection drop for test4_fsmdb0.agg_out (8a82459e-f737-476a-a69b-6b7105d3aef7).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.662-0500 I INDEX [ReplWriterWorker-14] index build: starting on test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.278-0500 I COMMAND [conn70] CMD: dropIndexes test4_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.654-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (33c3810a-89c2-4b7d-91d1-9199ea59da61)'. Ident: 'index-439--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 2596)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.406-0500 I STORAGE [conn77] renameCollection: renaming collection 1b702c2b-89f3-46c8-be2f-a0e511288776 from test4_fsmdb0.tmp.agg_out.976f88df-cb06-4670-8a1e-5cb8b684a2ad to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.662-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.654-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-429--7234316082034423155, commit timestamp: Timestamp(1574796731, 2596)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.406-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (8a82459e-f737-476a-a69b-6b7105d3aef7)'. Ident: 'index-409--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 510)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.662-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 104e3529-a7c6-4e0b-a0d3-a227fe844cff: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 (d928bc79-920f-4cd2-b7dd-0b7f7408581a ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.670-0500 I INDEX [ReplWriterWorker-9] index build: starting on test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.406-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (8a82459e-f737-476a-a69b-6b7105d3aef7)'. Ident: 'index-410--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 510)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.662-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.670-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.406-0500 I STORAGE [conn77] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-407--2588534479858262356, commit timestamp: Timestamp(1574796731, 510)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.662-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.670-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 4d597274-952b-4f82-9a78-84750becfc5e: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 (d928bc79-920f-4cd2-b7dd-0b7f7408581a ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.406-0500 I INDEX [conn85] Registering index build: b73daf32-2e91-4aea-8c04-47bb81def696
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.665-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.670-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.407-0500 I INDEX [conn88] Registering index build: 2e12b17c-dba4-479a-a540-c86bdc8b3cb6
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.668-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393) to test4_fsmdb0.agg_out and drop 7ea18e16-bafc-4b63-ae0b-6446e5352548.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.671-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.407-0500 I COMMAND [conn80] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2502075433573081738, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8676836328309034790, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796729398), clusterTime: Timestamp(1574796729, 2532) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796729, 2532), signature: { hash: BinData(0, 1B85D1C27347EBF9FE8EFCAA3F67C7E41A1AA28E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2007ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.668-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test4_fsmdb0.agg_out (7ea18e16-bafc-4b63-ae0b-6446e5352548) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 3164), t: 1 } and commit timestamp Timestamp(1574796731, 3164)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.674-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.407-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: affbdbfb-0cd1-4059-a196-ef5be6d283e1: test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 ( 3a2e094a-ab6f-4d64-ba6a-e752ef678c07 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.668-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test4_fsmdb0.agg_out (7ea18e16-bafc-4b63-ae0b-6446e5352548).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.676-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4d597274-952b-4f82-9a78-84750becfc5e: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 ( d928bc79-920f-4cd2-b7dd-0b7f7408581a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.425-0500 I INDEX [conn85] index build: starting on test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.668-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 6d7b1b53-805f-4e82-a6e8-dfd96f7e7393 from test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.676-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393) to test4_fsmdb0.agg_out and drop 7ea18e16-bafc-4b63-ae0b-6446e5352548.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.425-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.668-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (7ea18e16-bafc-4b63-ae0b-6446e5352548)'. Ident: 'index-432--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 3164)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.676-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test4_fsmdb0.agg_out (7ea18e16-bafc-4b63-ae0b-6446e5352548) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 3164), t: 1 } and commit timestamp Timestamp(1574796731, 3164)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.425-0500 I STORAGE [conn85] Index build initialized: b73daf32-2e91-4aea-8c04-47bb81def696: test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d (33c3810a-89c2-4b7d-91d1-9199ea59da61 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.668-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (7ea18e16-bafc-4b63-ae0b-6446e5352548)'. Ident: 'index-443--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 3164)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.676-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test4_fsmdb0.agg_out (7ea18e16-bafc-4b63-ae0b-6446e5352548).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.425-0500 I INDEX [conn85] Waiting for index build to complete: b73daf32-2e91-4aea-8c04-47bb81def696
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.668-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-431--2310912778499990807, commit timestamp: Timestamp(1574796731, 3164)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.676-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 6d7b1b53-805f-4e82-a6e8-dfd96f7e7393 from test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.425-0500 I INDEX [conn84] Index build completed: affbdbfb-0cd1-4059-a196-ef5be6d283e1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.669-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 104e3529-a7c6-4e0b-a0d3-a227fe844cff: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 ( d928bc79-920f-4cd2-b7dd-0b7f7408581a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.676-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (7ea18e16-bafc-4b63-ae0b-6446e5352548)'. Ident: 'index-432--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 3164)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.425-0500 I COMMAND [conn84] command test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796729, 4043), signature: { hash: BinData(0, 1B85D1C27347EBF9FE8EFCAA3F67C7E41A1AA28E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 2869 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 1935ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.670-0500 I STORAGE [ReplWriterWorker-12] createCollection: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 with provided UUID: 6dd883ae-d2e0-4193-aa8f-48b1aec8ae00 and options: { uuid: UUID("6dd883ae-d2e0-4193-aa8f-48b1aec8ae00"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.676-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (7ea18e16-bafc-4b63-ae0b-6446e5352548)'. Ident: 'index-443--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 3164)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.425-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.684-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.676-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-431--7234316082034423155, commit timestamp: Timestamp(1574796731, 3164)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.426-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.699-0500 I INDEX [ReplWriterWorker-6] index build: starting on test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.678-0500 I STORAGE [ReplWriterWorker-4] createCollection: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 with provided UUID: 6dd883ae-d2e0-4193-aa8f-48b1aec8ae00 and options: { uuid: UUID("6dd883ae-d2e0-4193-aa8f-48b1aec8ae00"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.428-0500 I STORAGE [conn77] createCollection: test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 with generated UUID: 6d7b1b53-805f-4e82-a6e8-dfd96f7e7393 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.699-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.691-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.437-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.699-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 3fa50333-44bd-45dc-b6e3-76d398181e69: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 (85b70d54-2e55-44ce-8459-084207afda61 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.705-0500 I INDEX [ReplWriterWorker-9] index build: starting on test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.452-0500 I INDEX [conn88] index build: starting on test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.699-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.705-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.452-0500 I INDEX [conn88] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.700-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.705-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 38d13903-7a98-4ddb-ba22-ed7146d45ceb: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 (85b70d54-2e55-44ce-8459-084207afda61 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.452-0500 I STORAGE [conn88] Index build initialized: 2e12b17c-dba4-479a-a540-c86bdc8b3cb6: test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 (7ea18e16-bafc-4b63-ae0b-6446e5352548 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.702-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.706-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.452-0500 I INDEX [conn88] Waiting for index build to complete: 2e12b17c-dba4-479a-a540-c86bdc8b3cb6
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.704-0500 I STORAGE [ReplWriterWorker-8] createCollection: config.cache.chunks.test4_fsmdb0.agg_out with provided UUID: b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2 and options: { uuid: UUID("b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.706-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.282-0500 I COMMAND [conn70] CMD: dropIndexes test4_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.453-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: b73daf32-2e91-4aea-8c04-47bb81def696: test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d ( 33c3810a-89c2-4b7d-91d1-9199ea59da61 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.705-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3fa50333-44bd-45dc-b6e3-76d398181e69: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 ( 85b70d54-2e55-44ce-8459-084207afda61 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.709-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.453-0500 I INDEX [conn85] Index build completed: b73daf32-2e91-4aea-8c04-47bb81def696
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.719-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.710-0500 I STORAGE [ReplWriterWorker-2] createCollection: config.cache.chunks.test4_fsmdb0.agg_out with provided UUID: b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2 and options: { uuid: UUID("b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2") }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.460-0500 I INDEX [conn77] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.726-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.713-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 38d13903-7a98-4ddb-ba22-ed7146d45ceb: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 ( 85b70d54-2e55-44ce-8459-084207afda61 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.460-0500 I COMMAND [conn82] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.727-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 (4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 4546), t: 1 } and commit timestamp Timestamp(1574796731, 4546)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.726-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.460-0500 I STORAGE [conn82] dropCollection: test4_fsmdb0.agg_out (1b702c2b-89f3-46c8-be2f-a0e511288776) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 1081), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.727-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 (4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.733-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.460-0500 I STORAGE [conn82] Finishing collection drop for test4_fsmdb0.agg_out (1b702c2b-89f3-46c8-be2f-a0e511288776).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.727-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 (4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad)'. Ident: 'index-442--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 4546)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.733-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 (4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 4546), t: 1 } and commit timestamp Timestamp(1574796731, 4546)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.460-0500 I STORAGE [conn82] renameCollection: renaming collection 657ce8ff-5de2-4302-ab06-b6667813fc84 from test4_fsmdb0.tmp.agg_out.d0a73811-add9-41c9-b5f7-023a5bebf766 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.727-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 (4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad)'. Ident: 'index-449--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 4546)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.733-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 (4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.460-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (1b702c2b-89f3-46c8-be2f-a0e511288776)'. Ident: 'index-413--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 1081)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.727-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948'. Ident: collection-441--2310912778499990807, commit timestamp: Timestamp(1574796731, 4546)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.733-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 (4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad)'. Ident: 'index-442--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 4546)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.461-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (1b702c2b-89f3-46c8-be2f-a0e511288776)'. Ident: 'index-414--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 1081)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.727-0500 I COMMAND [ReplWriterWorker-8] CMD: drop test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.733-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 (4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad)'. Ident: 'index-449--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 4546)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.461-0500 I STORAGE [conn82] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-411--2588534479858262356, commit timestamp: Timestamp(1574796731, 1081)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.727-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 (85b70d54-2e55-44ce-8459-084207afda61) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 4547), t: 1 } and commit timestamp Timestamp(1574796731, 4547)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.733-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948'. Ident: collection-441--7234316082034423155, commit timestamp: Timestamp(1574796731, 4546)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.461-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.727-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 (85b70d54-2e55-44ce-8459-084207afda61).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.734-0500 I COMMAND [ReplWriterWorker-9] CMD: drop test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.461-0500 I INDEX [conn77] Registering index build: 6650763b-d4ba-4af5-be8b-d52b68a959d7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.727-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 (85b70d54-2e55-44ce-8459-084207afda61)'. Ident: 'index-452--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 4547)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.734-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 (85b70d54-2e55-44ce-8459-084207afda61) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 4547), t: 1 } and commit timestamp Timestamp(1574796731, 4547)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.461-0500 I COMMAND [conn65] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7287026075569155799, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3602419483368529366, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796729440), clusterTime: Timestamp(1574796729, 3537) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796729, 3540), signature: { hash: BinData(0, 1B85D1C27347EBF9FE8EFCAA3F67C7E41A1AA28E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2018ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.727-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 (85b70d54-2e55-44ce-8459-084207afda61)'. Ident: 'index-457--2310912778499990807', commit timestamp: 'Timestamp(1574796731, 4547)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.734-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 (85b70d54-2e55-44ce-8459-084207afda61).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.461-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:11.727-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94'. Ident: collection-451--2310912778499990807, commit timestamp: Timestamp(1574796731, 4547)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.734-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 (85b70d54-2e55-44ce-8459-084207afda61)'. Ident: 'index-452--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 4547)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.464-0500 I STORAGE [conn82] createCollection: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 with generated UUID: 4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.734-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 (85b70d54-2e55-44ce-8459-084207afda61)'. Ident: 'index-457--7234316082034423155', commit timestamp: 'Timestamp(1574796731, 4547)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.471-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:11.734-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94'. Ident: collection-451--7234316082034423155, commit timestamp: Timestamp(1574796731, 4547)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.486-0500 I INDEX [conn77] index build: starting on test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.486-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.486-0500 I STORAGE [conn77] Index build initialized: 6650763b-d4ba-4af5-be8b-d52b68a959d7: test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.486-0500 I INDEX [conn77] Waiting for index build to complete: 6650763b-d4ba-4af5-be8b-d52b68a959d7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.486-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 2e12b17c-dba4-479a-a540-c86bdc8b3cb6: test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 ( 7ea18e16-bafc-4b63-ae0b-6446e5352548 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.487-0500 I INDEX [conn88] Index build completed: 2e12b17c-dba4-479a-a540-c86bdc8b3cb6
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.495-0500 I INDEX [conn82] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.495-0500 I COMMAND [conn84] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.495-0500 I STORAGE [conn84] dropCollection: test4_fsmdb0.agg_out (657ce8ff-5de2-4302-ab06-b6667813fc84) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 1650), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.495-0500 I STORAGE [conn84] Finishing collection drop for test4_fsmdb0.agg_out (657ce8ff-5de2-4302-ab06-b6667813fc84).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.495-0500 I STORAGE [conn84] renameCollection: renaming collection 3a2e094a-ab6f-4d64-ba6a-e752ef678c07 from test4_fsmdb0.tmp.agg_out.38b47748-fb53-420b-957c-ad16f685f817 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.495-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (657ce8ff-5de2-4302-ab06-b6667813fc84)'. Ident: 'index-418--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 1650)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.495-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (657ce8ff-5de2-4302-ab06-b6667813fc84)'. Ident: 'index-420--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 1650)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.495-0500 I STORAGE [conn84] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-415--2588534479858262356, commit timestamp: Timestamp(1574796731, 1650)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.496-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.496-0500 I INDEX [conn82] Registering index build: 9c7f75fa-68c6-4e5f-a1ec-3edaa1146e83
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.496-0500 I COMMAND [conn62] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7719453987603473345, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7952230771120872979, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796729444), clusterTime: Timestamp(1574796729, 3540) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796729, 3541), signature: { hash: BinData(0, 1B85D1C27347EBF9FE8EFCAA3F67C7E41A1AA28E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2051ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.496-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.500-0500 I STORAGE [conn84] createCollection: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 with generated UUID: d928bc79-920f-4cd2-b7dd-0b7f7408581a and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.504-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.521-0500 I INDEX [conn82] index build: starting on test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.521-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:12.621-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js finished.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:15.564-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash_background.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash_background"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash_background.js
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.287-0500 I INDEX [ReplWriterWorker-11] index build: starting on config.cache.chunks.test4_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.287-0500 I INDEX [ReplWriterWorker-15] index build: starting on config.cache.chunks.test4_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.319-0500 I COMMAND [conn70] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:12.377-0500 I COMMAND [conn52] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a") }, $clusterTime: { clusterTime: Timestamp(1574796731, 3163), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:821 protocol:op_msg 777ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.412-0500 I SHARDING [conn22] distributed lock 'test4_fsmdb0' acquired for 'enableSharding', ts : 5ddd7dbc5cde74b6784bb9b2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.423-0500 I COMMAND [conn170] command test4_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6") }, $clusterTime: { clusterTime: Timestamp(1574796731, 4546), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:818 protocol:op_msg 750ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:12.620-0500 I NETWORK [conn93] end connection 127.0.0.1:52954 (14 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:12.620-0500 I NETWORK [conn93] end connection 127.0.0.1:53844 (15 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.521-0500 I STORAGE [conn82] Index build initialized: 9c7f75fa-68c6-4e5f-a1ec-3edaa1146e83: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 (4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.287-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.287-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.337-0500 I COMMAND [conn70] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:12.414-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.412-0500 I SHARDING [conn22] Enabling sharding for database [test4_fsmdb0] in config db
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.424-0500 I COMMAND [conn164] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458") }, $clusterTime: { clusterTime: Timestamp(1574796732, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:818 protocol:op_msg 152ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.521-0500 I INDEX [conn82] Waiting for index build to complete: 9c7f75fa-68c6-4e5f-a1ec-3edaa1146e83
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.287-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 700b7607-21d1-42ed-bfd2-d2a34a254a02: config.cache.chunks.test4_fsmdb0.agg_out (b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.287-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: c0507c2d-9577-427f-a3ed-dd381a2759a5: config.cache.chunks.test4_fsmdb0.agg_out (b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.388-0500 I COMMAND [conn70] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:12.415-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out to version 1|0||5ddd7dbbcf8184c2e1494ea3 took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.413-0500 I SHARDING [conn22] distributed lock with ts: 5ddd7dbc5cde74b6784bb9b2' unlocked.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.424-0500 I COMMAND [conn169] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51") }, $clusterTime: { clusterTime: Timestamp(1574796732, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:823 protocol:op_msg 152ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.522-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 6650763b-d4ba-4af5-be8b-d52b68a959d7: test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 ( 6d7b1b53-805f-4e82-a6e8-dfd96f7e7393 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.287-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.287-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.390-0500 I COMMAND [conn70] CMD: dropIndexes test4_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:12.431-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.416-0500 I SHARDING [conn23] distributed lock 'test4_fsmdb0' acquired for 'shardCollection', ts : 5ddd7dbc5cde74b6784bb9ba
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.427-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out to version 1|0||5ddd7dbbcf8184c2e1494ea3 took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.522-0500 I INDEX [conn77] Index build completed: 6650763b-d4ba-4af5-be8b-d52b68a959d7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.288-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.288-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.393-0500 I COMMAND [conn70] CMD: dropIndexes test4_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:12.432-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out to version 1|0||5ddd7dbbcf8184c2e1494ea3 took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.417-0500 I SHARDING [conn23] distributed lock 'test4_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7dbc5cde74b6784bb9be
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.525-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.530-0500 I INDEX [conn84] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.530-0500 I COMMAND [conn85] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.290-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index lastmod_1 on ns config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.404-0500 I COMMAND [conn71] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:12.525-0500 I NETWORK [conn53] end connection 127.0.0.1:58884 (2 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.418-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.526-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out to version 1|0||5ddd7dbbcf8184c2e1494ea3 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.290-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.530-0500 I STORAGE [conn85] dropCollection: test4_fsmdb0.agg_out (3a2e094a-ab6f-4d64-ba6a-e752ef678c07) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 2091), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.530-0500 I STORAGE [conn85] Finishing collection drop for test4_fsmdb0.agg_out (3a2e094a-ab6f-4d64-ba6a-e752ef678c07).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.530-0500 I STORAGE [conn85] renameCollection: renaming collection 33c3810a-89c2-4b7d-91d1-9199ea59da61 from test4_fsmdb0.tmp.agg_out.f7805274-f9ac-4b9b-9f9c-9614d848c75d to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:12.543-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.419-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out to version 1|0||5ddd7dbbcf8184c2e1494ea3 took 0 ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.548-0500 I COMMAND [conn164] command test4_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458") }, $clusterTime: { clusterTime: Timestamp(1574796732, 2030), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:826 protocol:op_msg 123ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.292-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 700b7607-21d1-42ed-bfd2-d2a34a254a02: config.cache.chunks.test4_fsmdb0.agg_out ( b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.291-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: c0507c2d-9577-427f-a3ed-dd381a2759a5: config.cache.chunks.test4_fsmdb0.agg_out ( b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.434-0500 I COMMAND [conn71] CMD: dropIndexes test4_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.531-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (3a2e094a-ab6f-4d64-ba6a-e752ef678c07)'. Ident: 'index-419--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 2091)'
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:12.544-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out to version 1|0||5ddd7dbbcf8184c2e1494ea3 took 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.420-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7dbc5cde74b6784bb9be' unlocked.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.559-0500 I COMMAND [conn169] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51") }, $clusterTime: { clusterTime: Timestamp(1574796732, 2030), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:826 protocol:op_msg 134ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.304-0500 I STORAGE [ReplWriterWorker-14] createCollection: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 with provided UUID: 805b2a49-d858-4218-972a-c3eb9cb3ee43 and options: { uuid: UUID("805b2a49-d858-4218-972a-c3eb9cb3ee43"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.304-0500 I STORAGE [ReplWriterWorker-8] createCollection: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 with provided UUID: 805b2a49-d858-4218-972a-c3eb9cb3ee43 and options: { uuid: UUID("805b2a49-d858-4218-972a-c3eb9cb3ee43"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.492-0500 I COMMAND [conn71] CMD: dropIndexes test4_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.531-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (3a2e094a-ab6f-4d64-ba6a-e752ef678c07)'. Ident: 'index-422--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 2091)'
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:12.564-0500 I NETWORK [conn52] end connection 127.0.0.1:58880 (1 connection now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.422-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7dbc5cde74b6784bb9ba' unlocked.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.565-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.322-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.319-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.320-0500 I COMMAND [ReplWriterWorker-5] CMD: drop test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.531-0500 I STORAGE [conn85] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-416--2588534479858262356, commit timestamp: Timestamp(1574796731, 2091)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.429-0500 I SHARDING [conn23] distributed lock 'test4_fsmdb0' acquired for 'enableSharding', ts : 5ddd7dbc5cde74b6784bb9cf
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.566-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out to version 1|0||5ddd7dbbcf8184c2e1494ea3 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.323-0500 I COMMAND [ReplWriterWorker-1] CMD: drop test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.500-0500 I COMMAND [conn71] CMD: dropIndexes test4_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.320-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 (d928bc79-920f-4cd2-b7dd-0b7f7408581a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796732, 1), t: 1 } and commit timestamp Timestamp(1574796732, 1)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.531-0500 I INDEX [conn84] Registering index build: 24e9342e-5ce9-4019-b8e9-d440e0754f96
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.429-0500 I SHARDING [conn23] Enabling sharding for database [test4_fsmdb0] in config db
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:15.575-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js started with pid 16268.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.576-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.323-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 (d928bc79-920f-4cd2-b7dd-0b7f7408581a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796732, 1), t: 1 } and commit timestamp Timestamp(1574796732, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.528-0500 I COMMAND [conn67] CMD: dropIndexes test4_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.320-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 (d928bc79-920f-4cd2-b7dd-0b7f7408581a).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.531-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.430-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7dbc5cde74b6784bb9cf' unlocked.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.578-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out to version 1|0||5ddd7dbbcf8184c2e1494ea3 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.323-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 (d928bc79-920f-4cd2-b7dd-0b7f7408581a).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.534-0500 I COMMAND [conn67] CMD: dropIndexes test4_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.320-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 (d928bc79-920f-4cd2-b7dd-0b7f7408581a)'. Ident: 'index-446--7234316082034423155', commit timestamp: 'Timestamp(1574796732, 1)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.531-0500 I COMMAND [conn64] command test4_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 617242559858646341, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2078823672816764179, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796731366), clusterTime: Timestamp(1574796729, 4046) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796731, 2), signature: { hash: BinData(0, E2360818581976968E8FEF9467289F3A6EA54891), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 163ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.433-0500 I SHARDING [conn23] distributed lock 'test4_fsmdb0' acquired for 'shardCollection', ts : 5ddd7dbc5cde74b6784bb9d6
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.607-0500 I NETWORK [conn170] end connection 127.0.0.1:45744 (7 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.323-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 (d928bc79-920f-4cd2-b7dd-0b7f7408581a)'. Ident: 'index-446--2310912778499990807', commit timestamp: 'Timestamp(1574796732, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.547-0500 I COMMAND [conn67] CMD: dropIndexes test4_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.320-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 (d928bc79-920f-4cd2-b7dd-0b7f7408581a)'. Ident: 'index-453--7234316082034423155', commit timestamp: 'Timestamp(1574796732, 1)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.531-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.434-0500 I SHARDING [conn23] distributed lock 'test4_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7dbc5cde74b6784bb9db
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.609-0500 I NETWORK [conn173] end connection 127.0.0.1:45796 (6 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.323-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 (d928bc79-920f-4cd2-b7dd-0b7f7408581a)'. Ident: 'index-453--2310912778499990807', commit timestamp: 'Timestamp(1574796732, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.552-0500 I COMMAND [conn71] CMD: dropIndexes test4_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.320-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7'. Ident: collection-445--7234316082034423155, commit timestamp: Timestamp(1574796732, 1)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.534-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.488-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:12.613-0500 I NETWORK [conn171] end connection 127.0.0.1:45770 (5 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.323-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7'. Ident: collection-445--2310912778499990807, commit timestamp: Timestamp(1574796732, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.552-0500 I COMMAND [conn67] CMD: dropIndexes test4_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.337-0500 I INDEX [ReplWriterWorker-4] index build: starting on test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.534-0500 I STORAGE [conn85] createCollection: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 with generated UUID: 85b70d54-2e55-44ce-8459-084207afda61 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.489-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out to version 1|0||5ddd7dbbcf8184c2e1494ea3 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.339-0500 I INDEX [ReplWriterWorker-11] index build: starting on test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.561-0500 I COMMAND [conn67] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.337-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.548-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 9c7f75fa-68c6-4e5f-a1ec-3edaa1146e83: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 ( 4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.491-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7dbc5cde74b6784bb9db' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.339-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.562-0500 I COMMAND [conn67] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.337-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: db1774a4-18e1-4c76-9196-3a18af2b3c18: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 (6dd883ae-d2e0-4193-aa8f-48b1aec8ae00 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.564-0500 I INDEX [conn84] index build: starting on test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.494-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7dbc5cde74b6784bb9d6' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.339-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: c1086226-90e6-46f0-8eaf-6a41243430a2: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 (6dd883ae-d2e0-4193-aa8f-48b1aec8ae00 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.570-0500 I COMMAND [conn67] CMD: dropIndexes test4_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.337-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.564-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.495-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0' acquired for 'enableSharding', ts : 5ddd7dbc5cde74b6784bb9e6
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.339-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.572-0500 I COMMAND [conn67] CMD: dropIndexes test4_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.337-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.564-0500 I STORAGE [conn84] Index build initialized: 24e9342e-5ce9-4019-b8e9-d440e0754f96: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 (d928bc79-920f-4cd2-b7dd-0b7f7408581a ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.495-0500 I SHARDING [conn17] Enabling sharding for database [test4_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.339-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.576-0500 I COMMAND [conn67] CMD: dropIndexes test4_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.338-0500 I STORAGE [ReplWriterWorker-7] createCollection: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 with provided UUID: e4dc439b-a1ea-454c-8508-2a174ac32e17 and options: { uuid: UUID("e4dc439b-a1ea-454c-8508-2a174ac32e17"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.564-0500 I INDEX [conn84] Waiting for index build to complete: 24e9342e-5ce9-4019-b8e9-d440e0754f96
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.497-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dbc5cde74b6784bb9e6' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.340-0500 I STORAGE [ReplWriterWorker-6] createCollection: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 with provided UUID: e4dc439b-a1ea-454c-8508-2a174ac32e17 and options: { uuid: UUID("e4dc439b-a1ea-454c-8508-2a174ac32e17"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.578-0500 I COMMAND [conn67] CMD: dropIndexes test4_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.340-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.565-0500 I INDEX [conn82] Index build completed: 9c7f75fa-68c6-4e5f-a1ec-3edaa1146e83
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.500-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0' acquired for 'shardCollection', ts : 5ddd7dbc5cde74b6784bb9ee
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.342-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.580-0500 I COMMAND [conn67] CMD: dropIndexes test4_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.586-0500 I COMMAND [conn67] CMD: dropIndexes test4_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.570-0500 I INDEX [conn85] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.501-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7dbc5cde74b6784bb9f0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.352-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: c1086226-90e6-46f0-8eaf-6a41243430a2: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 ( 6dd883ae-d2e0-4193-aa8f-48b1aec8ae00 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.349-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: db1774a4-18e1-4c76-9196-3a18af2b3c18: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 ( 6dd883ae-d2e0-4193-aa8f-48b1aec8ae00 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.597-0500 I COMMAND [conn65] CMD: dropIndexes test4_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.600-0500 I COMMAND [conn65] CMD: dropIndexes test4_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.520-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.360-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.356-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.570-0500 I COMMAND [conn88] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.602-0500 I COMMAND [conn65] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.521-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out to version 1|0||5ddd7dbbcf8184c2e1494ea3 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.360-0500 I STORAGE [ReplWriterWorker-8] createCollection: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 with provided UUID: 238fea4c-3ff7-4edd-b81a-26f46b89b22d and options: { uuid: UUID("238fea4c-3ff7-4edd-b81a-26f46b89b22d"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.357-0500 I STORAGE [ReplWriterWorker-9] createCollection: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 with provided UUID: 238fea4c-3ff7-4edd-b81a-26f46b89b22d and options: { uuid: UUID("238fea4c-3ff7-4edd-b81a-26f46b89b22d"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.571-0500 I STORAGE [conn88] dropCollection: test4_fsmdb0.agg_out (33c3810a-89c2-4b7d-91d1-9199ea59da61) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 2596), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.610-0500 I COMMAND [conn65] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.522-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dbc5cde74b6784bb9f0' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.376-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.371-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.571-0500 I STORAGE [conn88] Finishing collection drop for test4_fsmdb0.agg_out (33c3810a-89c2-4b7d-91d1-9199ea59da61).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.614-0500 I NETWORK [conn191] end connection 127.0.0.1:39924 (46 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.524-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dbc5cde74b6784bb9ee' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.377-0500 I SHARDING [ReplWriterWorker-15] Marking collection config.cache.chunks.test4_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.371-0500 I SHARDING [ReplWriterWorker-11] Marking collection config.cache.chunks.test4_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.571-0500 I STORAGE [conn88] renameCollection: renaming collection 7ea18e16-bafc-4b63-ae0b-6446e5352548 from test4_fsmdb0.tmp.agg_out.4acba4cc-a092-44a6-967e-482ba8977192 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.614-0500 I COMMAND [conn65] CMD: dropIndexes test4_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.532-0500 I SHARDING [conn23] distributed lock 'test4_fsmdb0' acquired for 'enableSharding', ts : 5ddd7dbc5cde74b6784bb9ff
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.407-0500 I INDEX [ReplWriterWorker-14] index build: starting on test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.395-0500 I INDEX [ReplWriterWorker-7] index build: starting on test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.571-0500 I STORAGE [conn88] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (33c3810a-89c2-4b7d-91d1-9199ea59da61)'. Ident: 'index-426--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 2596)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:12.620-0500 I NETWORK [conn190] end connection 127.0.0.1:39918 (45 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.532-0500 I SHARDING [conn23] Enabling sharding for database [test4_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.407-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.395-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.571-0500 I STORAGE [conn88] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (33c3810a-89c2-4b7d-91d1-9199ea59da61)'. Ident: 'index-428--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 2596)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.534-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7dbc5cde74b6784bb9ff' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.407-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 18f1a734-b0e7-4a75-ba24-bc05174d0f4f: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 (805b2a49-d858-4218-972a-c3eb9cb3ee43 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.395-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 395b28d0-ef02-4e43-b125-1afb4fae5542: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 (805b2a49-d858-4218-972a-c3eb9cb3ee43 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.571-0500 I STORAGE [conn88] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-423--2588534479858262356, commit timestamp: Timestamp(1574796731, 2596)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.535-0500 I SHARDING [conn23] distributed lock 'test4_fsmdb0' acquired for 'shardCollection', ts : 5ddd7dbc5cde74b6784bba05
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.407-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.395-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.571-0500 I INDEX [conn85] Registering index build: ce6cc8c9-c8c6-48f6-b542-0830a553ed64
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.537-0500 I SHARDING [conn23] distributed lock 'test4_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7dbc5cde74b6784bba07
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.408-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.396-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.571-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.538-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.410-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.398-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.571-0500 I COMMAND [conn81] command test4_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2783097929295026417, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2056248185664087906, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796731367), clusterTime: Timestamp(1574796731, 2) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796731, 3), signature: { hash: BinData(0, E2360818581976968E8FEF9467289F3A6EA54891), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 202ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.539-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out to version 1|0||5ddd7dbbcf8184c2e1494ea3 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.420-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 18f1a734-b0e7-4a75-ba24-bc05174d0f4f: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 ( 805b2a49-d858-4218-972a-c3eb9cb3ee43 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.407-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 395b28d0-ef02-4e43-b125-1afb4fae5542: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 ( 805b2a49-d858-4218-972a-c3eb9cb3ee43 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.571-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.540-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7dbc5cde74b6784bba07' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.429-0500 I INDEX [ReplWriterWorker-5] index build: starting on test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.414-0500 I INDEX [ReplWriterWorker-10] index build: starting on test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.573-0500 I COMMAND [conn81] CMD: dropIndexes test4_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.542-0500 I SHARDING [conn23] distributed lock with ts: 5ddd7dbc5cde74b6784bba05' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.429-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.414-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.574-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.562-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0' acquired for 'enableSharding', ts : 5ddd7dbc5cde74b6784bba16
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.429-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 6c7ef3b5-81d2-4aa8-8219-32c783fcc110: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 (e4dc439b-a1ea-454c-8508-2a174ac32e17 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.414-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 21dbc8b7-193d-41f6-9b51-221b56097eaa: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 (e4dc439b-a1ea-454c-8508-2a174ac32e17 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.586-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 24e9342e-5ce9-4019-b8e9-d440e0754f96: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 ( d928bc79-920f-4cd2-b7dd-0b7f7408581a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.563-0500 I SHARDING [conn17] Enabling sharding for database [test4_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.429-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.414-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.598-0500 I INDEX [conn85] index build: starting on test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.564-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dbc5cde74b6784bba16' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.430-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.415-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.598-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.566-0500 I SHARDING [conn19] distributed lock 'test4_fsmdb0' acquired for 'shardCollection', ts : 5ddd7dbc5cde74b6784bba1d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.431-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.416-0500 I COMMAND [ReplWriterWorker-5] CMD: drop test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.598-0500 I STORAGE [conn85] Index build initialized: ce6cc8c9-c8c6-48f6-b542-0830a553ed64: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 (85b70d54-2e55-44ce-8459-084207afda61 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.567-0500 I SHARDING [conn19] distributed lock 'test4_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7dbc5cde74b6784bba22
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.432-0500 I COMMAND [ReplWriterWorker-4] CMD: drop test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.417-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 (6dd883ae-d2e0-4193-aa8f-48b1aec8ae00) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796732, 517), t: 1 } and commit timestamp Timestamp(1574796732, 517)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.598-0500 I INDEX [conn85] Waiting for index build to complete: ce6cc8c9-c8c6-48f6-b542-0830a553ed64
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.568-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.432-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 (6dd883ae-d2e0-4193-aa8f-48b1aec8ae00) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796732, 517), t: 1 } and commit timestamp Timestamp(1574796732, 517)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.417-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 (6dd883ae-d2e0-4193-aa8f-48b1aec8ae00).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.598-0500 I INDEX [conn84] Index build completed: 24e9342e-5ce9-4019-b8e9-d440e0754f96
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.569-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out to version 1|0||5ddd7dbbcf8184c2e1494ea3 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.432-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 (6dd883ae-d2e0-4193-aa8f-48b1aec8ae00).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.417-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 (6dd883ae-d2e0-4193-aa8f-48b1aec8ae00)'. Ident: 'index-456--7234316082034423155', commit timestamp: 'Timestamp(1574796732, 517)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.598-0500 I COMMAND [conn77] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.570-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7dbc5cde74b6784bba22' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.432-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 (6dd883ae-d2e0-4193-aa8f-48b1aec8ae00)'. Ident: 'index-456--2310912778499990807', commit timestamp: 'Timestamp(1574796732, 517)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.417-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 (6dd883ae-d2e0-4193-aa8f-48b1aec8ae00)'. Ident: 'index-465--7234316082034423155', commit timestamp: 'Timestamp(1574796732, 517)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.598-0500 I STORAGE [conn77] dropCollection: test4_fsmdb0.agg_out (7ea18e16-bafc-4b63-ae0b-6446e5352548) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796731, 3164), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.572-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7dbc5cde74b6784bba1d' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.432-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 (6dd883ae-d2e0-4193-aa8f-48b1aec8ae00)'. Ident: 'index-465--2310912778499990807', commit timestamp: 'Timestamp(1574796732, 517)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.417-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810'. Ident: collection-455--7234316082034423155, commit timestamp: Timestamp(1574796732, 517)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.598-0500 I STORAGE [conn77] Finishing collection drop for test4_fsmdb0.agg_out (7ea18e16-bafc-4b63-ae0b-6446e5352548).
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.574-0500 I SHARDING [conn19] distributed lock 'test4_fsmdb0' acquired for 'enableSharding', ts : 5ddd7dbc5cde74b6784bba2d
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.432-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810'. Ident: collection-455--2310912778499990807, commit timestamp: Timestamp(1574796732, 517)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.417-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.598-0500 I STORAGE [conn77] renameCollection: renaming collection 6d7b1b53-805f-4e82-a6e8-dfd96f7e7393 from test4_fsmdb0.tmp.agg_out.38d542e2-1a8c-43df-89c2-047f00d6f403 to test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.574-0500 I SHARDING [conn19] Enabling sharding for database [test4_fsmdb0] in config db
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.433-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 6c7ef3b5-81d2-4aa8-8219-32c783fcc110: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 ( e4dc439b-a1ea-454c-8508-2a174ac32e17 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.421-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 21dbc8b7-193d-41f6-9b51-221b56097eaa: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 ( e4dc439b-a1ea-454c-8508-2a174ac32e17 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.598-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (7ea18e16-bafc-4b63-ae0b-6446e5352548)'. Ident: 'index-427--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 3164)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.575-0500 I SHARDING [conn19] distributed lock with ts: 5ddd7dbc5cde74b6784bba2d' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.448-0500 I INDEX [ReplWriterWorker-10] index build: starting on test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.436-0500 I INDEX [ReplWriterWorker-14] index build: starting on test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.598-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (7ea18e16-bafc-4b63-ae0b-6446e5352548)'. Ident: 'index-430--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 3164)'
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.578-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0' acquired for 'shardCollection', ts : 5ddd7dbc5cde74b6784bba36
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.448-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.436-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.598-0500 I STORAGE [conn77] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-424--2588534479858262356, commit timestamp: Timestamp(1574796731, 3164)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.579-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0.agg_out' acquired for 'shardCollection', ts : 5ddd7dbc5cde74b6784bba3b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.448-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: c3a32d6f-386a-49f1-b1a7-a2275999e510: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 (238fea4c-3ff7-4edd-b81a-26f46b89b22d ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.436-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 80a09f68-69c6-41da-9559-d00ef01507cd: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 (238fea4c-3ff7-4edd-b81a-26f46b89b22d ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.598-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.580-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 from version {} to version { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.448-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.437-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.599-0500 I COMMAND [conn80] command test4_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1947101549906595301, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5404826603718382184, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796731426), clusterTime: Timestamp(1574796731, 576) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796731, 640), signature: { hash: BinData(0, E2360818581976968E8FEF9467289F3A6EA54891), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 171ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.581-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out to version 1|0||5ddd7dbbcf8184c2e1494ea3 took 0 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.448-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.437-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.599-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.582-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dbc5cde74b6784bba3b' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.450-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.440-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.601-0500 I STORAGE [conn77] createCollection: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 with generated UUID: 6dd883ae-d2e0-4193-aa8f-48b1aec8ae00 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.584-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dbc5cde74b6784bba36' unlocked.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.481-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c3a32d6f-386a-49f1-b1a7-a2275999e510: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 ( 238fea4c-3ff7-4edd-b81a-26f46b89b22d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.443-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 80a09f68-69c6-41da-9559-d00ef01507cd: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 ( 238fea4c-3ff7-4edd-b81a-26f46b89b22d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.602-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.613-0500 I NETWORK [conn138] end connection 127.0.0.1:57176 (26 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.500-0500 I COMMAND [ReplWriterWorker-10] CMD: drop test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.461-0500 I COMMAND [ReplWriterWorker-1] CMD: drop test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.612-0500 I SHARDING [conn55] CMD: shardcollection: { _shardsvrShardCollection: "test4_fsmdb0.agg_out", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796731, 3359), signature: { hash: BinData(0, E2360818581976968E8FEF9467289F3A6EA54891), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796731, 3232), t: 1 } }, $db: "admin" }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:12.620-0500 I NETWORK [conn137] end connection 127.0.0.1:57174 (25 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.500-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 (805b2a49-d858-4218-972a-c3eb9cb3ee43) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796732, 2027), t: 1 } and commit timestamp Timestamp(1574796732, 2027)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.461-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 (805b2a49-d858-4218-972a-c3eb9cb3ee43) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796732, 2027), t: 1 } and commit timestamp Timestamp(1574796732, 2027)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.612-0500 I SHARDING [conn55] about to log metadata event into changelog: { _id: "nz_desktop:20004-2019-11-26T14:32:11.612-0500-5ddd7dbbcf8184c2e1494e9c", server: "nz_desktop:20004", shard: "shard-rs1", clientAddr: "127.0.0.1:46028", time: new Date(1574796731612), what: "shardCollection.start", ns: "test4_fsmdb0.agg_out", details: { shardKey: { _id: "hashed" }, collection: "test4_fsmdb0.agg_out", uuid: UUID("6d7b1b53-805f-4e82-a6e8-dfd96f7e7393"), empty: false, fromMapReduce: false, primary: "shard-rs1:shard-rs1/localhost:20004,localhost:20005,localhost:20006", numChunks: 1 } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.500-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 (805b2a49-d858-4218-972a-c3eb9cb3ee43).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.461-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 (805b2a49-d858-4218-972a-c3eb9cb3ee43).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.613-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: ce6cc8c9-c8c6-48f6-b542-0830a553ed64: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 ( 85b70d54-2e55-44ce-8459-084207afda61 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.500-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 (805b2a49-d858-4218-972a-c3eb9cb3ee43)'. Ident: 'index-464--2310912778499990807', commit timestamp: 'Timestamp(1574796732, 2027)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.461-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 (805b2a49-d858-4218-972a-c3eb9cb3ee43)'. Ident: 'index-464--7234316082034423155', commit timestamp: 'Timestamp(1574796732, 2027)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.613-0500 I INDEX [conn85] Index build completed: ce6cc8c9-c8c6-48f6-b542-0830a553ed64
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.500-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 (805b2a49-d858-4218-972a-c3eb9cb3ee43)'. Ident: 'index-471--2310912778499990807', commit timestamp: 'Timestamp(1574796732, 2027)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.461-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 (805b2a49-d858-4218-972a-c3eb9cb3ee43)'. Ident: 'index-471--7234316082034423155', commit timestamp: 'Timestamp(1574796732, 2027)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.618-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out to version 1|0||5ddd7dbbcf8184c2e1494ea3 took 1 ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.500-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204'. Ident: collection-463--2310912778499990807, commit timestamp: Timestamp(1574796732, 2027)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.461-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204'. Ident: collection-463--7234316082034423155, commit timestamp: Timestamp(1574796732, 2027)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.619-0500 I SHARDING [conn55] Marking collection test4_fsmdb0.agg_out as collection version: 1|0||5ddd7dbbcf8184c2e1494ea3, shard version: 1|0||5ddd7dbbcf8184c2e1494ea3
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.501-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.462-0500 I COMMAND [ReplWriterWorker-4] CMD: drop test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.619-0500 I SHARDING [conn55] Created 1 chunk(s) for: test4_fsmdb0.agg_out, producing collection version 1|0||5ddd7dbbcf8184c2e1494ea3
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.501-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 (e4dc439b-a1ea-454c-8508-2a174ac32e17) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796732, 2029), t: 1 } and commit timestamp Timestamp(1574796732, 2029)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.462-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 (e4dc439b-a1ea-454c-8508-2a174ac32e17) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796732, 2029), t: 1 } and commit timestamp Timestamp(1574796732, 2029)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.619-0500 I SHARDING [conn55] about to log metadata event into changelog: { _id: "nz_desktop:20004-2019-11-26T14:32:11.619-0500-5ddd7dbbcf8184c2e1494ea8", server: "nz_desktop:20004", shard: "shard-rs1", clientAddr: "127.0.0.1:46028", time: new Date(1574796731619), what: "shardCollection.end", ns: "test4_fsmdb0.agg_out", details: { version: "1|0||5ddd7dbbcf8184c2e1494ea3" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.501-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 (e4dc439b-a1ea-454c-8508-2a174ac32e17).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.462-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 (e4dc439b-a1ea-454c-8508-2a174ac32e17).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.619-0500 I STORAGE [ShardServerCatalogCacheLoader-2] createCollection: config.cache.chunks.test4_fsmdb0.agg_out with provided UUID: b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2 and options: { uuid: UUID("b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2") }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.502-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 (e4dc439b-a1ea-454c-8508-2a174ac32e17)'. Ident: 'index-468--2310912778499990807', commit timestamp: 'Timestamp(1574796732, 2029)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.462-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 (e4dc439b-a1ea-454c-8508-2a174ac32e17)'. Ident: 'index-468--7234316082034423155', commit timestamp: 'Timestamp(1574796732, 2029)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.622-0500 I INDEX [conn77] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.502-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 (e4dc439b-a1ea-454c-8508-2a174ac32e17)'. Ident: 'index-473--2310912778499990807', commit timestamp: 'Timestamp(1574796732, 2029)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.462-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 (e4dc439b-a1ea-454c-8508-2a174ac32e17)'. Ident: 'index-473--7234316082034423155', commit timestamp: 'Timestamp(1574796732, 2029)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.632-0500 I COMMAND [conn82] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.502-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0'. Ident: collection-467--2310912778499990807, commit timestamp: Timestamp(1574796732, 2029)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.462-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0'. Ident: collection-467--7234316082034423155, commit timestamp: Timestamp(1574796732, 2029)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.632-0500 I INDEX [conn77] Registering index build: ff8b62a5-e87f-4d38-ba3b-9e2694fd65cc
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.502-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.463-0500 I COMMAND [ReplWriterWorker-14] CMD: drop test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.632-0500 I COMMAND [conn82] CMD: drop test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.502-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 (238fea4c-3ff7-4edd-b81a-26f46b89b22d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796732, 2030), t: 1 } and commit timestamp Timestamp(1574796732, 2030)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.463-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 (238fea4c-3ff7-4edd-b81a-26f46b89b22d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796732, 2030), t: 1 } and commit timestamp Timestamp(1574796732, 2030)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.638-0500 I INDEX [ShardServerCatalogCacheLoader-2] index build: done building index _id_ on ns config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.502-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 (238fea4c-3ff7-4edd-b81a-26f46b89b22d).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.463-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 (238fea4c-3ff7-4edd-b81a-26f46b89b22d).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.638-0500 I INDEX [ShardServerCatalogCacheLoader-2] Registering index build: 1b5f706e-d38e-4b5e-abd4-5112774b5f61
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.502-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 (238fea4c-3ff7-4edd-b81a-26f46b89b22d)'. Ident: 'index-470--2310912778499990807', commit timestamp: 'Timestamp(1574796732, 2030)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.463-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 (238fea4c-3ff7-4edd-b81a-26f46b89b22d)'. Ident: 'index-470--7234316082034423155', commit timestamp: 'Timestamp(1574796732, 2030)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.655-0500 I INDEX [conn77] index build: starting on test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.502-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 (238fea4c-3ff7-4edd-b81a-26f46b89b22d)'. Ident: 'index-475--2310912778499990807', commit timestamp: 'Timestamp(1574796732, 2030)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.463-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 (238fea4c-3ff7-4edd-b81a-26f46b89b22d)'. Ident: 'index-475--7234316082034423155', commit timestamp: 'Timestamp(1574796732, 2030)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.655-0500 I INDEX [conn77] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.502-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8'. Ident: collection-469--2310912778499990807, commit timestamp: Timestamp(1574796732, 2030)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.463-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8'. Ident: collection-469--7234316082034423155, commit timestamp: Timestamp(1574796732, 2030)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.655-0500 I STORAGE [conn77] Index build initialized: ff8b62a5-e87f-4d38-ba3b-9e2694fd65cc: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 (6dd883ae-d2e0-4193-aa8f-48b1aec8ae00 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.503-0500 I STORAGE [ReplWriterWorker-2] createCollection: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 with provided UUID: 0c1484b7-7ff8-4053-b91a-813f178aa00a and options: { uuid: UUID("0c1484b7-7ff8-4053-b91a-813f178aa00a"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.463-0500 I STORAGE [ReplWriterWorker-11] createCollection: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 with provided UUID: 0c1484b7-7ff8-4053-b91a-813f178aa00a and options: { uuid: UUID("0c1484b7-7ff8-4053-b91a-813f178aa00a"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.655-0500 I INDEX [conn77] Waiting for index build to complete: ff8b62a5-e87f-4d38-ba3b-9e2694fd65cc
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.518-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.497-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.670-0500 I INDEX [ShardServerCatalogCacheLoader-2] index build: starting on config.cache.chunks.test4_fsmdb0.agg_out properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.519-0500 I STORAGE [ReplWriterWorker-4] createCollection: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e with provided UUID: cf46e68b-d632-49be-82f6-c82d66fa191f and options: { uuid: UUID("cf46e68b-d632-49be-82f6-c82d66fa191f"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.498-0500 I STORAGE [ReplWriterWorker-7] createCollection: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e with provided UUID: cf46e68b-d632-49be-82f6-c82d66fa191f and options: { uuid: UUID("cf46e68b-d632-49be-82f6-c82d66fa191f"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.670-0500 I INDEX [ShardServerCatalogCacheLoader-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.536-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.514-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.670-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Index build initialized: 1b5f706e-d38e-4b5e-abd4-5112774b5f61: config.cache.chunks.test4_fsmdb0.agg_out (b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.553-0500 I INDEX [ReplWriterWorker-10] index build: starting on test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.531-0500 I INDEX [ReplWriterWorker-1] index build: starting on test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.670-0500 I INDEX [ShardServerCatalogCacheLoader-2] Waiting for index build to complete: 1b5f706e-d38e-4b5e-abd4-5112774b5f61
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.553-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.531-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.670-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.553-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 98731420-2fda-4b72-9f3a-ad09529de30f: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 (0c1484b7-7ff8-4053-b91a-813f178aa00a ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.531-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 82590f03-d3a5-43b9-b2f8-23468c512581: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 (0c1484b7-7ff8-4053-b91a-813f178aa00a ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.670-0500 I STORAGE [conn82] dropCollection: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 (4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.553-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.531-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.670-0500 I STORAGE [conn82] Finishing collection drop for test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 (4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.554-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.531-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.670-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 (4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad)'. Ident: 'index-437--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 4546)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.557-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.534-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.670-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948 (4c5457b7-3c92-4f4b-9fb9-dc1b2c664aad)'. Ident: 'index-438--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 4546)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.560-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 98731420-2fda-4b72-9f3a-ad09529de30f: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 ( 0c1484b7-7ff8-4053-b91a-813f178aa00a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.536-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 82590f03-d3a5-43b9-b2f8-23468c512581: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 ( 0c1484b7-7ff8-4053-b91a-813f178aa00a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.670-0500 I STORAGE [conn82] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948'. Ident: collection-435--2588534479858262356, commit timestamp: Timestamp(1574796731, 4546)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.572-0500 I INDEX [ReplWriterWorker-2] index build: starting on test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.552-0500 I INDEX [ReplWriterWorker-11] index build: starting on test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.670-0500 I COMMAND [conn84] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.572-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.552-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.671-0500 I COMMAND [conn85] renameCollectionForCommand: rename test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 to test4_fsmdb0.agg_out and drop test4_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.572-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 57361672-7727-43f3-b7b1-d31e4f718337: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e (cf46e68b-d632-49be-82f6-c82d66fa191f ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.552-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 4cc8a0fb-85e6-4c32-9845-c3f956efff3d: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e (cf46e68b-d632-49be-82f6-c82d66fa191f ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.671-0500 I COMMAND [conn65] command test4_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8685586252720697335, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5297054686904644310, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796731463), clusterTime: Timestamp(1574796731, 1145) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796731, 1209), signature: { hash: BinData(0, E2360818581976968E8FEF9467289F3A6EA54891), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:799 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 207ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.572-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.552-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.671-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.573-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.552-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.671-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.575-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.555-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.671-0500 I COMMAND [conn85] CMD: drop test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.577-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 57361672-7727-43f3-b7b1-d31e4f718337: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e ( cf46e68b-d632-49be-82f6-c82d66fa191f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.558-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4cc8a0fb-85e6-4c32-9845-c3f956efff3d: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e ( cf46e68b-d632-49be-82f6-c82d66fa191f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.671-0500 I STORAGE [conn85] dropCollection: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 (85b70d54-2e55-44ce-8459-084207afda61) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.583-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.565-0500 I COMMAND [ReplWriterWorker-0] CMD: drop test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.671-0500 I COMMAND [conn84] CMD: drop test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.583-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e (cf46e68b-d632-49be-82f6-c82d66fa191f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796732, 2809), t: 1 } and commit timestamp Timestamp(1574796732, 2809)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.565-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e (cf46e68b-d632-49be-82f6-c82d66fa191f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796732, 2809), t: 1 } and commit timestamp Timestamp(1574796732, 2809)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.671-0500 I STORAGE [conn85] Finishing collection drop for test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 (85b70d54-2e55-44ce-8459-084207afda61).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.583-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e (cf46e68b-d632-49be-82f6-c82d66fa191f).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.565-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e (cf46e68b-d632-49be-82f6-c82d66fa191f).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.671-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 (85b70d54-2e55-44ce-8459-084207afda61)'. Ident: 'index-445--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 4547)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.583-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e (cf46e68b-d632-49be-82f6-c82d66fa191f)'. Ident: 'index-480--2310912778499990807', commit timestamp: 'Timestamp(1574796732, 2809)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.565-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e (cf46e68b-d632-49be-82f6-c82d66fa191f)'. Ident: 'index-480--7234316082034423155', commit timestamp: 'Timestamp(1574796732, 2809)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.671-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 (85b70d54-2e55-44ce-8459-084207afda61)'. Ident: 'index-446--2588534479858262356', commit timestamp: 'Timestamp(1574796731, 4547)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.583-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e (cf46e68b-d632-49be-82f6-c82d66fa191f)'. Ident: 'index-483--2310912778499990807', commit timestamp: 'Timestamp(1574796732, 2809)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.565-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e (cf46e68b-d632-49be-82f6-c82d66fa191f)'. Ident: 'index-483--7234316082034423155', commit timestamp: 'Timestamp(1574796732, 2809)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.671-0500 I STORAGE [conn85] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94'. Ident: collection-443--2588534479858262356, commit timestamp: Timestamp(1574796731, 4547)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.583-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e'. Ident: collection-479--2310912778499990807, commit timestamp: Timestamp(1574796732, 2809)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.565-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e'. Ident: collection-479--7234316082034423155, commit timestamp: Timestamp(1574796732, 2809)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.589-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.671-0500 I STORAGE [conn84] dropCollection: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 (d928bc79-920f-4cd2-b7dd-0b7f7408581a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.571-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.589-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 (0c1484b7-7ff8-4053-b91a-813f178aa00a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796732, 3056), t: 1 } and commit timestamp Timestamp(1574796732, 3056)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.671-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.571-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 (0c1484b7-7ff8-4053-b91a-813f178aa00a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796732, 3056), t: 1 } and commit timestamp Timestamp(1574796732, 3056)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.589-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 (0c1484b7-7ff8-4053-b91a-813f178aa00a).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.674-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index lastmod_1 on ns config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.571-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 (0c1484b7-7ff8-4053-b91a-813f178aa00a).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.589-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 (0c1484b7-7ff8-4053-b91a-813f178aa00a)'. Ident: 'index-478--2310912778499990807', commit timestamp: 'Timestamp(1574796732, 3056)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.674-0500 I STORAGE [conn82] createCollection: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 with generated UUID: 805b2a49-d858-4218-972a-c3eb9cb3ee43 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.571-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 (0c1484b7-7ff8-4053-b91a-813f178aa00a)'. Ident: 'index-478--7234316082034423155', commit timestamp: 'Timestamp(1574796732, 3056)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:15.596-0500 I COMMAND [conn164] command test4_fsmdb0.agg_out appName: "tid:0" command: dropIndexes { dropIndexes: "agg_out", index: { flag: 1.0 }, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458") }, $clusterTime: { clusterTime: Timestamp(1574796732, 3076), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } numYields:0 reslen:550 protocol:op_msg 2985ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.589-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 (0c1484b7-7ff8-4053-b91a-813f178aa00a)'. Ident: 'index-481--2310912778499990807', commit timestamp: 'Timestamp(1574796732, 3056)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:11.728-0500 I COMMAND [conn55] command admin.$cmd appName: "tid:1" command: _shardsvrShardCollection { _shardsvrShardCollection: "test4_fsmdb0.agg_out", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("3a910b40-797a-442e-8f27-720741d58d70"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796731, 3359), signature: { hash: BinData(0, E2360818581976968E8FEF9467289F3A6EA54891), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58884", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796731, 3232), t: 1 } }, $db: "admin" } numYields:0 reslen:414 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 6 } }, ReplicationStateTransition: { acquireCount: { w: 8 } }, Global: { acquireCount: { r: 4, w: 4 } }, Database: { acquireCount: { r: 4, w: 4 } }, Collection: { acquireCount: { r: 5, w: 2, W: 2 } }, Mutex: { acquireCount: { r: 8, W: 4 } } } flowControl:{ acquireCount: 3 } storage:{} protocol:op_msg 118ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.571-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 (0c1484b7-7ff8-4053-b91a-813f178aa00a)'. Ident: 'index-481--7234316082034423155', commit timestamp: 'Timestamp(1574796732, 3056)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:15.596-0500 I COMMAND [conn169] command test4_fsmdb0.agg_out appName: "tid:4" command: dropIndexes { dropIndexes: "agg_out", index: { padding: "text" }, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51") }, $clusterTime: { clusterTime: Timestamp(1574796732, 3076), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test4_fsmdb0" } numYields:0 reslen:556 protocol:op_msg 2982ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.589-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298'. Ident: collection-477--2310912778499990807, commit timestamp: Timestamp(1574796732, 3056)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.269-0500 I STORAGE [conn84] Finishing collection drop for test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 (d928bc79-920f-4cd2-b7dd-0b7f7408581a).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.571-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298'. Ident: collection-477--7234316082034423155, commit timestamp: Timestamp(1574796732, 3056)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.608-0500 W CONTROL [conn90] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 603 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.269-0500 I COMMAND [conn85] command test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94 command: drop { drop: "tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94", databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, $clusterTime: { clusterTime: Timestamp(1574796731, 4546), signature: { hash: BinData(0, E2360818581976968E8FEF9467289F3A6EA54891), keyId: 6763700092420489256 } }, $configServerState: { opTime: { ts: Timestamp(1574796731, 3555), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:420 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 598ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.608-0500 W CONTROL [conn90] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 492 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.610-0500 I NETWORK [conn90] end connection 127.0.0.1:52614 (14 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.270-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 (d928bc79-920f-4cd2-b7dd-0b7f7408581a)'. Ident: 'index-441--2588534479858262356', commit timestamp: 'Timestamp(1574796732, 1)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.610-0500 I NETWORK [conn90] end connection 127.0.0.1:35976 (14 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:12.620-0500 I NETWORK [conn89] end connection 127.0.0.1:52592 (13 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.270-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 (d928bc79-920f-4cd2-b7dd-0b7f7408581a)'. Ident: 'index-442--2588534479858262356', commit timestamp: 'Timestamp(1574796732, 1)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:12.620-0500 I NETWORK [conn89] end connection 127.0.0.1:35950 (13 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.270-0500 I STORAGE [conn84] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7'. Ident: collection-439--2588534479858262356, commit timestamp: Timestamp(1574796732, 1)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.270-0500 I COMMAND [conn84] command test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7 command: drop { drop: "tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7", databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, $clusterTime: { clusterTime: Timestamp(1574796731, 4546), signature: { hash: BinData(0, E2360818581976968E8FEF9467289F3A6EA54891), keyId: 6763700092420489256 } }, $configServerState: { opTime: { ts: Timestamp(1574796731, 3555), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:420 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 598ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.270-0500 I COMMAND [conn197] command test4_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796731, 4546), lsid: { id: UUID("3f42616f-e948-49d9-97ee-c2eb72d5ff98") }, $clusterTime: { clusterTime: Timestamp(1574796731, 4546), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test4_fsmdb0" } numYields:0 reslen:753 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 531322 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 535ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.270-0500 I COMMAND [conn64] command test4_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7681051939275376174, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5556841890733102401, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796731532), clusterTime: Timestamp(1574796731, 2155) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796731, 2155), signature: { hash: BinData(0, E2360818581976968E8FEF9467289F3A6EA54891), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:799 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 736ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.270-0500 I COMMAND [conn62] command test4_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 613961141674195514, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6198348230528968561, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796731498), clusterTime: Timestamp(1574796731, 1714) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796731, 1714), signature: { hash: BinData(0, E2360818581976968E8FEF9467289F3A6EA54891), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: cannot rename to a sharded collection" errName:IllegalOperation errCode:20 reslen:799 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 771ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.272-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.273-0500 I STORAGE [conn84] createCollection: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 with generated UUID: e4dc439b-a1ea-454c-8508-2a174ac32e17 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.273-0500 I STORAGE [conn85] createCollection: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 with generated UUID: 238fea4c-3ff7-4edd-b81a-26f46b89b22d and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.278-0500 I COMMAND [conn81] CMD: dropIndexes test4_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.282-0500 I COMMAND [conn81] CMD: dropIndexes test4_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.282-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 1b5f706e-d38e-4b5e-abd4-5112774b5f61: config.cache.chunks.test4_fsmdb0.agg_out ( b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.282-0500 I INDEX [ShardServerCatalogCacheLoader-2] Index build completed: 1b5f706e-d38e-4b5e-abd4-5112774b5f61
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:15.598-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.283-0500 I COMMAND [ShardServerCatalogCacheLoader-2] command config.cache.chunks.test4_fsmdb0.agg_out command: createIndexes { createIndexes: "cache.chunks.test4_fsmdb0.agg_out", indexes: [ { name: "lastmod_1", key: { lastmod: 1 } } ], $db: "config" } numYields:0 reslen:427 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { r: 2, w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 3 } storage:{} protocol:op_msg 663ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.283-0500 I SHARDING [ShardServerCatalogCacheLoader-2] Marking collection config.cache.chunks.test4_fsmdb0.agg_out as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.284-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: ff8b62a5-e87f-4d38-ba3b-9e2694fd65cc: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 ( 6dd883ae-d2e0-4193-aa8f-48b1aec8ae00 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.284-0500 I INDEX [conn77] Index build completed: ff8b62a5-e87f-4d38-ba3b-9e2694fd65cc
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.284-0500 I COMMAND [conn77] command test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796731, 3671), signature: { hash: BinData(0, E2360818581976968E8FEF9467289F3A6EA54891), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796731, 3555), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 9545 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 661ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.302-0500 I INDEX [conn82] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.302-0500 I COMMAND [conn82] command test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 appName: "tid:2" command: create { create: "tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204", temp: true, validationLevel: "strict", validationAction: "error", databaseVersion: { uuid: UUID("f0228eca-5881-45e6-b34b-b9781fd1a8a0"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796731, 4547), signature: { hash: BinData(0, E2360818581976968E8FEF9467289F3A6EA54891), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796731, 3555), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 628ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.308-0500 I INDEX [conn84] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.316-0500 I INDEX [conn85] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.316-0500 I INDEX [conn82] Registering index build: 0d9d0a05-f45f-4152-be3b-00bead8b1374
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.316-0500 I INDEX [conn84] Registering index build: 5218bddf-4068-4121-983b-d5c398e74237
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.316-0500 I INDEX [conn85] Registering index build: a8b62fe6-3e7c-465b-a9ed-905778814e92
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.319-0500 I COMMAND [conn81] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.334-0500 I INDEX [conn82] index build: starting on test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.334-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.334-0500 I STORAGE [conn82] Index build initialized: 0d9d0a05-f45f-4152-be3b-00bead8b1374: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 (805b2a49-d858-4218-972a-c3eb9cb3ee43 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.334-0500 I INDEX [conn82] Waiting for index build to complete: 0d9d0a05-f45f-4152-be3b-00bead8b1374
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.337-0500 I COMMAND [conn81] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.353-0500 I INDEX [conn84] index build: starting on test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.353-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.353-0500 I STORAGE [conn84] Index build initialized: 5218bddf-4068-4121-983b-d5c398e74237: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 (e4dc439b-a1ea-454c-8508-2a174ac32e17 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.353-0500 I INDEX [conn84] Waiting for index build to complete: 5218bddf-4068-4121-983b-d5c398e74237
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.353-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.353-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.354-0500 I COMMAND [conn77] CMD: drop test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.354-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.354-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.364-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.367-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.376-0500 I INDEX [conn85] index build: starting on test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.376-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 0d9d0a05-f45f-4152-be3b-00bead8b1374: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 ( 805b2a49-d858-4218-972a-c3eb9cb3ee43 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.376-0500 I INDEX [conn85] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.376-0500 I STORAGE [conn85] Index build initialized: a8b62fe6-3e7c-465b-a9ed-905778814e92: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 (238fea4c-3ff7-4edd-b81a-26f46b89b22d ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.376-0500 I INDEX [conn85] Waiting for index build to complete: a8b62fe6-3e7c-465b-a9ed-905778814e92
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.376-0500 I INDEX [conn82] Index build completed: 0d9d0a05-f45f-4152-be3b-00bead8b1374
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.376-0500 I STORAGE [conn77] dropCollection: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 (6dd883ae-d2e0-4193-aa8f-48b1aec8ae00) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.376-0500 I STORAGE [conn77] Finishing collection drop for test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 (6dd883ae-d2e0-4193-aa8f-48b1aec8ae00).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.376-0500 I STORAGE [conn77] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 (6dd883ae-d2e0-4193-aa8f-48b1aec8ae00)'. Ident: 'index-449--2588534479858262356', commit timestamp: 'Timestamp(1574796732, 517)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.598-0500 I COMMAND [conn65] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.376-0500 I STORAGE [conn77] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810 (6dd883ae-d2e0-4193-aa8f-48b1aec8ae00)'. Ident: 'index-452--2588534479858262356', commit timestamp: 'Timestamp(1574796732, 517)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.376-0500 I STORAGE [conn77] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810'. Ident: collection-448--2588534479858262356, commit timestamp: Timestamp(1574796732, 517)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.377-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.377-0500 I COMMAND [conn80] command test4_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7018933367562359243, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3160740746558208066, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796731599), clusterTime: Timestamp(1574796731, 3228) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d96ceddb-6f14-4a13-8b8f-91b83e68595a"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796731, 3228), signature: { hash: BinData(0, E2360818581976968E8FEF9467289F3A6EA54891), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:58880", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796727, 1), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:991 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 776ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.377-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 5218bddf-4068-4121-983b-d5c398e74237: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 ( e4dc439b-a1ea-454c-8508-2a174ac32e17 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.377-0500 I INDEX [conn84] Index build completed: 5218bddf-4068-4121-983b-d5c398e74237
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.378-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.381-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.383-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: a8b62fe6-3e7c-465b-a9ed-905778814e92: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 ( 238fea4c-3ff7-4edd-b81a-26f46b89b22d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.383-0500 I INDEX [conn85] Index build completed: a8b62fe6-3e7c-465b-a9ed-905778814e92
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.388-0500 I COMMAND [conn80] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.390-0500 I COMMAND [conn80] CMD: dropIndexes test4_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.393-0500 I COMMAND [conn80] CMD: dropIndexes test4_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.408-0500 I COMMAND [conn80] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.421-0500 I COMMAND [conn85] CMD: drop test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.422-0500 I STORAGE [conn85] dropCollection: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 (805b2a49-d858-4218-972a-c3eb9cb3ee43) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.422-0500 I STORAGE [conn85] Finishing collection drop for test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 (805b2a49-d858-4218-972a-c3eb9cb3ee43).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.422-0500 I STORAGE [conn85] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 (805b2a49-d858-4218-972a-c3eb9cb3ee43)'. Ident: 'index-459--2588534479858262356', commit timestamp: 'Timestamp(1574796732, 2027)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.422-0500 I STORAGE [conn85] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204 (805b2a49-d858-4218-972a-c3eb9cb3ee43)'. Ident: 'index-462--2588534479858262356', commit timestamp: 'Timestamp(1574796732, 2027)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.422-0500 I STORAGE [conn85] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204'. Ident: collection-456--2588534479858262356, commit timestamp: Timestamp(1574796732, 2027)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.423-0500 I COMMAND [conn84] CMD: drop test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.423-0500 I COMMAND [conn65] command test4_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2332137529041513578, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1556587466198726157, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796731672), clusterTime: Timestamp(1574796731, 4546) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b136c8da-872d-4f23-9c1c-591ca6d496a6"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796731, 4547), signature: { hash: BinData(0, E2360818581976968E8FEF9467289F3A6EA54891), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45744", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796731, 3555), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:988 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 749ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.423-0500 I STORAGE [conn84] dropCollection: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 (e4dc439b-a1ea-454c-8508-2a174ac32e17) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.423-0500 I STORAGE [conn84] Finishing collection drop for test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 (e4dc439b-a1ea-454c-8508-2a174ac32e17).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.423-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 (e4dc439b-a1ea-454c-8508-2a174ac32e17)'. Ident: 'index-460--2588534479858262356', commit timestamp: 'Timestamp(1574796732, 2029)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.423-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0 (e4dc439b-a1ea-454c-8508-2a174ac32e17)'. Ident: 'index-464--2588534479858262356', commit timestamp: 'Timestamp(1574796732, 2029)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.423-0500 I STORAGE [conn84] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0'. Ident: collection-457--2588534479858262356, commit timestamp: Timestamp(1574796732, 2029)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.423-0500 I COMMAND [conn82] CMD: drop test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.423-0500 I STORAGE [conn82] dropCollection: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 (238fea4c-3ff7-4edd-b81a-26f46b89b22d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.423-0500 I STORAGE [conn82] Finishing collection drop for test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 (238fea4c-3ff7-4edd-b81a-26f46b89b22d).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.423-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 (238fea4c-3ff7-4edd-b81a-26f46b89b22d)'. Ident: 'index-461--2588534479858262356', commit timestamp: 'Timestamp(1574796732, 2030)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.423-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8 (238fea4c-3ff7-4edd-b81a-26f46b89b22d)'. Ident: 'index-466--2588534479858262356', commit timestamp: 'Timestamp(1574796732, 2030)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.423-0500 I STORAGE [conn82] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8'. Ident: collection-458--2588534479858262356, commit timestamp: Timestamp(1574796732, 2030)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.423-0500 I COMMAND [conn62] command test4_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6755615654670860213, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6479508783571722160, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796732271), clusterTime: Timestamp(1574796732, 2) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796732, 2), signature: { hash: BinData(0, 1B19FAC56B4A897F7A08461A171EF8453D341921), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796731, 3555), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:988 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 151ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.424-0500 I COMMAND [conn64] command test4_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8676179066948883003, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4312349322474462318, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796732271), clusterTime: Timestamp(1574796732, 2) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796732, 2), signature: { hash: BinData(0, 1B19FAC56B4A897F7A08461A171EF8453D341921), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796731, 3555), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:993 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 151ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.426-0500 I STORAGE [conn84] createCollection: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 with generated UUID: 0c1484b7-7ff8-4053-b91a-813f178aa00a and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.426-0500 I STORAGE [conn82] createCollection: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e with generated UUID: cf46e68b-d632-49be-82f6-c82d66fa191f and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.449-0500 I INDEX [conn84] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.487-0500 I INDEX [conn82] index build: done building index _id_ on ns test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.487-0500 I COMMAND [conn80] CMD: dropIndexes test4_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.487-0500 I INDEX [conn84] Registering index build: 574b0924-632b-4f7d-bf05-cd582e43aa01
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.487-0500 I INDEX [conn82] Registering index build: 323a591d-8cda-4d44-a915-b34206012480
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.492-0500 I COMMAND [conn80] CMD: dropIndexes test4_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.502-0500 I INDEX [conn84] index build: starting on test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.502-0500 I INDEX [conn84] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.502-0500 I STORAGE [conn84] Index build initialized: 574b0924-632b-4f7d-bf05-cd582e43aa01: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 (0c1484b7-7ff8-4053-b91a-813f178aa00a ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.502-0500 I INDEX [conn84] Waiting for index build to complete: 574b0924-632b-4f7d-bf05-cd582e43aa01
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.502-0500 I COMMAND [conn81] CMD: dropIndexes test4_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.502-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.503-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.511-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.519-0500 I INDEX [conn82] index build: starting on test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.519-0500 I INDEX [conn82] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.519-0500 I STORAGE [conn82] Index build initialized: 323a591d-8cda-4d44-a915-b34206012480: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e (cf46e68b-d632-49be-82f6-c82d66fa191f ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.519-0500 I INDEX [conn82] Waiting for index build to complete: 323a591d-8cda-4d44-a915-b34206012480
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.520-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.520-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 574b0924-632b-4f7d-bf05-cd582e43aa01: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 ( 0c1484b7-7ff8-4053-b91a-813f178aa00a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.520-0500 I INDEX [conn84] Index build completed: 574b0924-632b-4f7d-bf05-cd582e43aa01
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.521-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.524-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.527-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 323a591d-8cda-4d44-a915-b34206012480: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e ( cf46e68b-d632-49be-82f6-c82d66fa191f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.527-0500 I INDEX [conn82] Index build completed: 323a591d-8cda-4d44-a915-b34206012480
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.529-0500 I COMMAND [conn64] CMD: dropIndexes test4_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.534-0500 I COMMAND [conn64] CMD: dropIndexes test4_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.547-0500 I COMMAND [conn82] CMD: drop test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.548-0500 I COMMAND [conn64] CMD: dropIndexes test4_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.548-0500 I STORAGE [conn82] dropCollection: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e (cf46e68b-d632-49be-82f6-c82d66fa191f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.548-0500 I STORAGE [conn82] Finishing collection drop for test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e (cf46e68b-d632-49be-82f6-c82d66fa191f).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.548-0500 I STORAGE [conn82] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e (cf46e68b-d632-49be-82f6-c82d66fa191f)'. Ident: 'index-471--2588534479858262356', commit timestamp: 'Timestamp(1574796732, 2809)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.548-0500 I STORAGE [conn82] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e (cf46e68b-d632-49be-82f6-c82d66fa191f)'. Ident: 'index-474--2588534479858262356', commit timestamp: 'Timestamp(1574796732, 2809)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.548-0500 I STORAGE [conn82] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e'. Ident: collection-469--2588534479858262356, commit timestamp: Timestamp(1574796732, 2809)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.548-0500 I COMMAND [conn65] command test4_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2667577458929584396, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2861896779176417409, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796732424), clusterTime: Timestamp(1574796732, 2030) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796732, 2030), signature: { hash: BinData(0, 1B19FAC56B4A897F7A08461A171EF8453D341921), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796732, 1361), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:996 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 122ms
[fsm_workload_test:agg_out] 2019-11-26T14:32:15.614-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:15.648-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:15.608-0500 I NETWORK [conn164] end connection 127.0.0.1:45728 (4 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:15.620-0500 I NETWORK [conn132] end connection 127.0.0.1:57036 (24 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.620-0500 I NETWORK [conn180] end connection 127.0.0.1:39792 (44 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:18.085-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:18.085-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:18.085-0500 [jsTest] New session started with sessionID: { "id" : UUID("d068ee7f-0c07-478c-845f-d34c92796fa5") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:32:18.085-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:18.085-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:18.085-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:18.085-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:18.085-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:18.085-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:18.085-0500 [jsTest] Workload(s) completed in 19167 ms: jstests/concurrency/fsm_workloads/agg_out.js
[fsm_workload_test:agg_out] 2019-11-26T14:32:18.085-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:18.085-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:18.086-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500 Implicit session: session { "id" : UUID("6e5c0208-de9c-4392-82f9-c4adc83a9f7f") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500 true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500 2019-11-26T14:32:15.658-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500 2019-11-26T14:32:15.658-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500 2019-11-26T14:32:15.659-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500 [jsTest] New session started with sessionID: { "id" : UUID("7437eaaa-7ec3-4528-b9c9-518de29864c3") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500 2019-11-26T14:32:15.662-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500 2019-11-26T14:32:15.662-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500 2019-11-26T14:32:15.662-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500 2019-11-26T14:32:15.662-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500 2019-11-26T14:32:15.663-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.086-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.087-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.087-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.087-0500 [jsTest] New session started with sessionID: { "id" : UUID("6be66171-6b33-4f99-9236-179db74685df") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.087-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.087-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.087-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:15.621-0500 I NETWORK [conn51] end connection 127.0.0.1:58822 (0 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.087-0500 2019-11-26T14:32:15.664-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:15.622-0500 I NETWORK [conn89] end connection 127.0.0.1:52866 (13 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:18.087-0500 FSM workload jstests/concurrency/fsm_workloads/agg_out.js finished.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:15.622-0500 I NETWORK [conn83] end connection 127.0.0.1:53756 (14 connections now open)
[executor:fsm_workload_test:job0] 2019-11-26T14:32:18.088-0500 agg_out.js ran in 21.98 seconds: no failures detected.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.087-0500 2019-11-26T14:32:15.665-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:15.623-0500 I NETWORK [conn85] end connection 127.0.0.1:52506 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.088-0500 2019-11-26T14:32:15.665-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.089-0500 2019-11-26T14:32:15.665-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:15.623-0500 I NETWORK [conn79] end connection 127.0.0.1:35868 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.089-0500 2019-11-26T14:32:15.665-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.552-0500 I COMMAND [conn80] CMD: dropIndexes test4_fsmdb0.agg_out: { randInt: -1.0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.089-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:15.621-0500 I NETWORK [conn134] end connection 127.0.0.1:57074 (23 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.089-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.089-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.089-0500 [jsTest] New session started with sessionID: { "id" : UUID("d6d61c2a-77a8-4622-92ce-b46fd30012e3") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.089-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.089-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.089-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:15.612-0500 I NETWORK [conn169] end connection 127.0.0.1:45740 (3 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.090-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.090-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.621-0500 I NETWORK [conn182] end connection 127.0.0.1:39816 (43 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.090-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:15.634-0500 I NETWORK [conn87] end connection 127.0.0.1:52816 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.090-0500 Implicit session: session { "id" : UUID("0d67f142-aa4c-40e8-b553-0907f6be8ddd") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.090-0500 Implicit session: session { "id" : UUID("a4f1ed7e-5a1b-4fe9-b876-23884b05068e") }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:15.634-0500 I NETWORK [conn81] end connection 127.0.0.1:53702 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.090-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.090-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0] Pausing the background check repl dbhash thread.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.091-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:15.634-0500 I NETWORK [conn83] end connection 127.0.0.1:52466 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.091-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:15.665-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52636 #91 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.091-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.552-0500 I COMMAND [conn65] CMD: dropIndexes test4_fsmdb0.agg_out: { padding: "text" }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.091-0500 [jsTest] New session started with sessionID: { "id" : UUID("30b3cb07-eabf-4cc3-8432-31b8415e4d63") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.091-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:15.622-0500 I NETWORK [conn135] end connection 127.0.0.1:57084 (22 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.091-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:15.620-0500 I NETWORK [conn157] end connection 127.0.0.1:45626 (2 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.091-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.092-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.622-0500 I NETWORK [conn183] end connection 127.0.0.1:39828 (42 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.092-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:15.662-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52996 #95 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.092-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.092-0500 [jsTest] New session started with sessionID: { "id" : UUID("2fca30d0-4d10-4c8f-a045-6742204c4714") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:15.662-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53882 #95 (14 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.092-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.092-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:15.634-0500 I NETWORK [conn77] end connection 127.0.0.1:35820 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.092-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:15.665-0500 I NETWORK [conn91] received client metadata from 127.0.0.1:52636 conn91: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.093-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.558-0500 I COMMAND [conn84] CMD: drop test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.093-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:15.623-0500 I NETWORK [conn136] end connection 127.0.0.1:57086 (21 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.093-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:15.621-0500 I NETWORK [conn159] end connection 127.0.0.1:45670 (1 connection now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.093-0500 [jsTest] New session started with sessionID: { "id" : UUID("f72535c2-2281-42b0-ba51-10571d22823f") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.622-0500 I NETWORK [conn185] end connection 127.0.0.1:39836 (41 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.093-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:15.663-0500 I NETWORK [conn95] received client metadata from 127.0.0.1:52996 conn95: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.093-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:15.663-0500 I NETWORK [conn95] received client metadata from 127.0.0.1:53882 conn95: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.094-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.094-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:15.665-0500 I NETWORK [listener] connection accepted from 127.0.0.1:35994 #91 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.094-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:15.730-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52650 #92 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.094-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.094-0500 [jsTest] New session started with sessionID: { "id" : UUID("f691fdca-72a5-4bd2-993f-047c4e27b62b") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.558-0500 I STORAGE [conn84] dropCollection: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 (0c1484b7-7ff8-4053-b91a-813f178aa00a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.094-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.094-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:15.634-0500 I NETWORK [conn130] end connection 127.0.0.1:57032 (20 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.095-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:15.623-0500 I NETWORK [conn160] end connection 127.0.0.1:45678 (0 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.095-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.622-0500 I NETWORK [conn184] end connection 127.0.0.1:39830 (40 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.095-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:15.732-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53026 #96 (14 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.095-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:15.733-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53916 #96 (15 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.095-0500 [jsTest] New session started with sessionID: { "id" : UUID("c9fdce54-25b2-4c27-9b77-888e06f0498b") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.095-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:15.665-0500 I NETWORK [conn91] received client metadata from 127.0.0.1:35994 conn91: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.095-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:15.731-0500 I NETWORK [conn92] received client metadata from 127.0.0.1:52650 conn92: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.096-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.558-0500 I STORAGE [conn84] Finishing collection drop for test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 (0c1484b7-7ff8-4053-b91a-813f178aa00a).
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.096-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:15.658-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57216 #139 (21 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.096-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:15.649-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45812 #174 (1 connection now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.096-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.634-0500 I NETWORK [conn178] end connection 127.0.0.1:39786 (39 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.096-0500 [jsTest] New session started with sessionID: { "id" : UUID("e118372d-1401-4521-9ace-8cc2ebb4d75c") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:15.732-0500 I NETWORK [conn96] received client metadata from 127.0.0.1:53026 conn96: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.096-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:15.733-0500 I NETWORK [conn96] received client metadata from 127.0.0.1:53916 conn96: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.097-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:15.744-0500 W CONTROL [conn96] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 323 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.097-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:15.741-0500 W CONTROL [conn92] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 603 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.097-0500 Running data consistency checks for replica set: shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.558-0500 I STORAGE [conn84] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 (0c1484b7-7ff8-4053-b91a-813f178aa00a)'. Ident: 'index-470--2588534479858262356', commit timestamp: 'Timestamp(1574796732, 3056)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.097-0500 Running data consistency checks for replica set: shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:15.659-0500 I NETWORK [conn139] received client metadata from 127.0.0.1:57216 conn139: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.097-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:15.649-0500 I NETWORK [conn174] received client metadata from 127.0.0.1:45812 conn174: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.097-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.662-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39964 #194 (40 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.098-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:15.743-0500 W CONTROL [conn96] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 718 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.098-0500 [jsTest] New session started with sessionID: { "id" : UUID("2e27d95b-7508-44b3-b4ae-7206daefd048") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:15.760-0500 W CONTROL [conn96] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 718 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.098-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:15.762-0500 I NETWORK [conn96] end connection 127.0.0.1:53026 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.098-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:15.763-0500 W CONTROL [conn92] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 603 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.098-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.098-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.558-0500 I STORAGE [conn84] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298 (0c1484b7-7ff8-4053-b91a-813f178aa00a)'. Ident: 'index-472--2588534479858262356', commit timestamp: 'Timestamp(1574796732, 3056)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.098-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:15.659-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57218 #140 (22 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.099-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:15.720-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45834 #175 (2 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.099-0500 [jsTest] New session started with sessionID: { "id" : UUID("1af51067-9bf3-4e8d-aeea-ad7ebe859430") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.663-0500 I NETWORK [conn194] received client metadata from 127.0.0.1:39964 conn194: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.099-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.663-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39966 #195 (41 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.099-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.663-0500 I NETWORK [conn195] received client metadata from 127.0.0.1:39966 conn195: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.099-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:15.776-0500 I NETWORK [conn95] end connection 127.0.0.1:52996 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.099-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.100-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:15.766-0500 I NETWORK [conn92] end connection 127.0.0.1:52650 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.100-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.100-0500 [jsTest] New session started with sessionID: { "id" : UUID("6912c663-eefd-443d-ad32-19ac04f3baf1") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.558-0500 I STORAGE [conn84] Deferring table drop for collection 'test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298'. Ident: collection-468--2588534479858262356, commit timestamp: Timestamp(1574796732, 3056)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.100-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:15.659-0500 I NETWORK [conn140] received client metadata from 127.0.0.1:57218 conn140: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.100-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:15.720-0500 I NETWORK [conn175] received client metadata from 127.0.0.1:45834 conn175: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.100-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:15.721-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45836 #176 (3 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.100-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:15.760-0500 W CONTROL [conn96] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 323 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.101-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.729-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39982 #196 (42 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.101-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:15.776-0500 I NETWORK [conn91] end connection 127.0.0.1:52636 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.101-0500 [jsTest] New session started with sessionID: { "id" : UUID("d7939688-2dc7-4918-a960-84b994aa4aae") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.559-0500 I COMMAND [conn62] command test4_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8186158068383584708, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8968338981817809391, ns: "test4_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test4_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796732425), clusterTime: Timestamp(1574796732, 2030) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796732, 2030), signature: { hash: BinData(0, 1B19FAC56B4A897F7A08461A171EF8453D341921), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796732, 1361), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298\", to: \"test4_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test4_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:996 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 133ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.101-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.101-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:15.769-0500 I NETWORK [conn140] end connection 127.0.0.1:57218 (21 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.101-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.101-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:15.731-0500 I NETWORK [listener] connection accepted from 127.0.0.1:36012 #92 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.102-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:15.721-0500 I NETWORK [conn176] received client metadata from 127.0.0.1:45836 conn176: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.102-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:15.762-0500 I NETWORK [conn96] end connection 127.0.0.1:53916 (14 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.102-0500 [jsTest] New session started with sessionID: { "id" : UUID("2820ea1c-1d2f-4adc-9c5a-17bab80b387c") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.729-0500 I NETWORK [conn196] received client metadata from 127.0.0.1:39982 conn196: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.102-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.561-0500 I COMMAND [conn62] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.102-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:15.776-0500 I NETWORK [conn139] end connection 127.0.0.1:57216 (20 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.102-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:15.731-0500 I NETWORK [conn92] received client metadata from 127.0.0.1:36012 conn92: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.103-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:15.761-0500 I NETWORK [conn176] end connection 127.0.0.1:45836 (2 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.103-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:15.776-0500 I NETWORK [conn95] end connection 127.0.0.1:53882 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.103-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.731-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39990 #197 (43 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.103-0500 [jsTest] New session started with sessionID: { "id" : UUID("f9b0df42-8885-4f33-aae4-a6aa214f84fd") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.562-0500 I COMMAND [conn62] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.103-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.103-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:15.742-0500 W CONTROL [conn92] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 492 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.103-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:15.765-0500 I NETWORK [conn175] end connection 127.0.0.1:45834 (1 connection now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.732-0500 I NETWORK [conn197] received client metadata from 127.0.0.1:39990 conn197: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:18.104-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js finished.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.570-0500 I COMMAND [conn62] CMD: dropIndexes test4_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[executor:fsm_workload_test:job0] 2019-11-26T14:32:18.104-0500 agg_out:CheckReplDBHashInBackground ran in 21.99 seconds: no failures detected.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:15.764-0500 W CONTROL [conn92] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 492 }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:15.768-0500 I NETWORK [conn174] end connection 127.0.0.1:45812 (0 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.743-0500 W CONTROL [conn197] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.572-0500 I COMMAND [conn62] CMD: dropIndexes test4_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:15.766-0500 I NETWORK [conn92] end connection 127.0.0.1:36012 (12 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.759-0500 W CONTROL [conn197] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.576-0500 I COMMAND [conn62] CMD: dropIndexes test4_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:15.776-0500 I NETWORK [conn91] end connection 127.0.0.1:35994 (11 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.762-0500 I NETWORK [conn196] end connection 127.0.0.1:39982 (42 connections now open)
[executor:fsm_workload_test:job0] 2019-11-26T14:32:18.106-0500 Running agg_out:CheckReplDBHash...
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.578-0500 I COMMAND [conn62] CMD: dropIndexes test4_fsmdb0.agg_out: { randInt: -1.0 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.107-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash.js
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.762-0500 I NETWORK [conn197] end connection 127.0.0.1:39990 (41 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.580-0500 I COMMAND [conn62] CMD: dropIndexes test4_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.769-0500 I NETWORK [conn195] end connection 127.0.0.1:39966 (40 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.586-0500 I COMMAND [conn62] CMD: dropIndexes test4_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:15.776-0500 I NETWORK [conn194] end connection 127.0.0.1:39964 (39 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.597-0500 I COMMAND [conn65] CMD: dropIndexes test4_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.600-0500 I COMMAND [conn65] CMD: dropIndexes test4_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.602-0500 I COMMAND [conn65] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.607-0500 W CONTROL [conn197] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 377 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.609-0500 I NETWORK [conn196] end connection 127.0.0.1:47412 (49 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.610-0500 I NETWORK [conn197] end connection 127.0.0.1:47414 (48 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.610-0500 I COMMAND [conn65] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.614-0500 I NETWORK [conn195] end connection 127.0.0.1:47398 (47 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:12.620-0500 I NETWORK [conn194] end connection 127.0.0.1:47396 (46 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.596-0500 I COMMAND [conn62] CMD: dropIndexes test4_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.596-0500 I COMMAND [conn65] command test4_fsmdb0.agg_out appName: "tid:0" command: dropIndexes { dropIndexes: "agg_out", index: { flag: 1.0 }, writeConcern: { w: 1, wtimeout: 0 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("9d61435e-d844-47c7-b952-b761253a3458"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796732, 3076), signature: { hash: BinData(0, 1B19FAC56B4A897F7A08461A171EF8453D341921), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45728", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796732, 3068), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"can't find index with key: { flag: 1.0 }" errName:IndexNotFound errCode:27 reslen:424 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 2985ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.596-0500 I COMMAND [conn62] command test4_fsmdb0.agg_out appName: "tid:4" command: dropIndexes { dropIndexes: "agg_out", index: { padding: "text" }, writeConcern: { w: 1, wtimeout: 0 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("a6285e77-9e08-47dd-b6c1-7e5c42850c51"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796732, 3076), signature: { hash: BinData(0, 1B19FAC56B4A897F7A08461A171EF8453D341921), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:45740", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796732, 3068), t: 1 } }, $db: "test4_fsmdb0" } numYields:0 ok:0 errMsg:"can't find index with key: { padding: \"text\" }" errName:IndexNotFound errCode:27 reslen:430 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2981526 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 2981ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.598-0500 I COMMAND [conn62] CMD: dropIndexes test4_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.621-0500 I NETWORK [conn184] end connection 127.0.0.1:47274 (45 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.621-0500 I NETWORK [conn186] end connection 127.0.0.1:47284 (44 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.622-0500 I NETWORK [conn187] end connection 127.0.0.1:47304 (43 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.622-0500 I NETWORK [conn189] end connection 127.0.0.1:47312 (42 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.622-0500 I NETWORK [conn188] end connection 127.0.0.1:47306 (41 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.634-0500 I NETWORK [conn182] end connection 127.0.0.1:47264 (40 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.665-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47434 #198 (41 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.665-0500 I NETWORK [conn198] received client metadata from 127.0.0.1:47434 conn198: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.665-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47440 #199 (42 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.666-0500 I NETWORK [conn199] received client metadata from 127.0.0.1:47440 conn199: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.727-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47446 #200 (43 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.728-0500 I NETWORK [conn200] received client metadata from 127.0.0.1:47446 conn200: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.730-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47450 #201 (44 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.730-0500 I NETWORK [conn201] received client metadata from 127.0.0.1:47450 conn201: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.741-0500 W CONTROL [conn201] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 377 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.763-0500 W CONTROL [conn201] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 377 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.765-0500 I NETWORK [conn200] end connection 127.0.0.1:47446 (43 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.766-0500 I NETWORK [conn201] end connection 127.0.0.1:47450 (42 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.769-0500 I NETWORK [conn199] end connection 127.0.0.1:47440 (41 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:15.776-0500 I NETWORK [conn198] end connection 127.0.0.1:47434 (40 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.114-0500 JSTest jstests/hooks/run_check_repl_dbhash.js started with pid 16308.
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.137-0500 MongoDB shell version v0.0.0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.187-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:18.187-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45854 #177 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:18.187-0500 I NETWORK [conn177] received client metadata from 127.0.0.1:45854 conn177: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.189-0500 Implicit session: session { "id" : UUID("c0cd31cd-c1c7-4cbf-acfa-ac355c363e83") }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.191-0500 MongoDB server version: 0.0.0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.192-0500 true
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.196-0500 2019-11-26T14:32:18.196-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.196-0500 2019-11-26T14:32:18.196-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:18.196-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57258 #141 (21 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:18.196-0500 I NETWORK [conn141] received client metadata from 127.0.0.1:57258 conn141: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.197-0500 2019-11-26T14:32:18.197-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:18.197-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57260 #142 (22 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:18.197-0500 I NETWORK [conn142] received client metadata from 127.0.0.1:57260 conn142: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.198-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.198-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.198-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.198-0500 [jsTest] New session started with sessionID: { "id" : UUID("d095eed8-9032-480b-9edf-afe250670eda") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.198-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.198-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.198-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.200-0500 2019-11-26T14:32:18.200-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.200-0500 2019-11-26T14:32:18.200-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.200-0500 2019-11-26T14:32:18.200-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.200-0500 2019-11-26T14:32:18.200-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:18.200-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53924 #97 (14 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:18.200-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53038 #97 (13 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:18.200-0500 I NETWORK [conn97] received client metadata from 127.0.0.1:53924 conn97: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.200-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40006 #198 (40 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:18.200-0500 I NETWORK [conn97] received client metadata from 127.0.0.1:53038 conn97: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.201-0500 2019-11-26T14:32:18.201-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.200-0500 I NETWORK [conn198] received client metadata from 127.0.0.1:40006 conn198: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.201-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40008 #199 (41 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.201-0500 I NETWORK [conn199] received client metadata from 127.0.0.1:40008 conn199: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.202-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.202-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.202-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.202-0500 [jsTest] New session started with sessionID: { "id" : UUID("2e6c37f8-b2de-41ae-b21d-86d5f938ab00") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.202-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.202-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.202-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.202-0500 2019-11-26T14:32:18.202-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.202-0500 2019-11-26T14:32:18.202-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.202-0500 2019-11-26T14:32:18.202-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.202-0500 2019-11-26T14:32:18.202-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:18.203-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52674 #93 (12 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:18.203-0500 I NETWORK [listener] connection accepted from 127.0.0.1:36036 #93 (12 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.203-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47480 #202 (41 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:18.203-0500 I NETWORK [conn93] received client metadata from 127.0.0.1:52674 conn93: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:18.203-0500 I NETWORK [conn93] received client metadata from 127.0.0.1:36036 conn93: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.203-0500 I NETWORK [conn202] received client metadata from 127.0.0.1:47480 conn202: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.203-0500 2019-11-26T14:32:18.203-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.204-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47482 #203 (42 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.204-0500 I NETWORK [conn203] received client metadata from 127.0.0.1:47482 conn203: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.204-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.204-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.204-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.204-0500 [jsTest] New session started with sessionID: { "id" : UUID("ded622c7-17c1-4574-927a-1363ac267d35") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.204-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.205-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.205-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.205-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "sharded cluster", "configsvr" : { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }, "shards" : { "shard-rs0" : { "type" : "replica set", "primary" : "localhost:20001", "nodes" : [ "localhost:20001", "localhost:20002", "localhost:20003" ] }, "shard-rs1" : { "type" : "replica set", "primary" : "localhost:20004", "nodes" : [ "localhost:20004", "localhost:20005", "localhost:20006" ] } }, "mongos" : { "type" : "mongos router", "nodes" : [ "localhost:20007", "localhost:20008" ] } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.283-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.283-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:18.283-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45876 #178 (2 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:18.284-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45878 #179 (3 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:18.284-0500 I NETWORK [conn178] received client metadata from 127.0.0.1:45876 conn178: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:18.284-0500 I NETWORK [conn179] received client metadata from 127.0.0.1:45878 conn179: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.286-0500 Implicit session: session { "id" : UUID("bf5273ca-5ee0-4de7-b8d0-cb0f99b27449") }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.286-0500 Implicit session: session { "id" : UUID("786221a9-6ad7-4372-a0dc-99df048b2a19") }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.287-0500 MongoDB server version: 0.0.0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.287-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.291-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40022 #200 (42 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.291-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47490 #204 (43 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.291-0500 I NETWORK [conn204] received client metadata from 127.0.0.1:47490 conn204: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.291-0500 I NETWORK [conn200] received client metadata from 127.0.0.1:40022 conn200: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.292-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.292-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.292-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.292-0500 [jsTest] New session started with sessionID: { "id" : UUID("b59d3f55-05ce-4688-a6ec-8177e4b280b3") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.292-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.292-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.292-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.292-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.292-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.292-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.293-0500 [jsTest] New session started with sessionID: { "id" : UUID("ca71398a-c195-43e7-8ab1-fdb16a72c280") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.293-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.293-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.293-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.294-0500 Recreating replica set from config {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.294-0500 "_id" : "shard-rs1",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.294-0500 "version" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.294-0500 "protocolVersion" : NumberLong(1),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.294-0500 "writeConcernMajorityJournalDefault" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.294-0500 "members" : [
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.294-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40028 #201 (43 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.294-0500 {
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.294-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47492 #205 (44 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.295-0500 "_id" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.295-0500 "host" : "localhost:20004",
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.294-0500 I NETWORK [conn201] received client metadata from 127.0.0.1:40028 conn201: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.295-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.294-0500 I NETWORK [conn205] received client metadata from 127.0.0.1:47492 conn205: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.295-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.295-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.295-0500 "priority" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.295-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.295-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500 "_id" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500 "host" : "localhost:20005",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.296-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 "_id" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 "host" : "localhost:20006",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 ],
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 "settings" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 "chainingAllowed" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 "heartbeatIntervalMillis" : 2000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 "heartbeatTimeoutSecs" : 10,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:18.295-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53064 #98 (14 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.297-0500 "electionTimeoutMillis" : 86400000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.298-0500 "catchUpTimeoutMillis" : -1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.298-0500 "catchUpTakeoverDelayMillis" : 30000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.298-0500 "getLastErrorModes" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.298-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.298-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.298-0500 "getLastErrorDefaults" : {
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:18.295-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52696 #94 (13 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.298-0500 "w" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.298-0500 "wtimeout" : 0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.298-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.298-0500 "replicaSetId" : ObjectId("5ddd7d6bcf8184c2e1492eba")
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.298-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.298-0500 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:18.295-0500 I NETWORK [listener] connection accepted from 127.0.0.1:36060 #94 (13 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.298-0500 Recreating replica set from config {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.298-0500 "_id" : "shard-rs0",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.298-0500 "version" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "protocolVersion" : NumberLong(1),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "writeConcernMajorityJournalDefault" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "members" : [
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "_id" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "host" : "localhost:20001",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "priority" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "_id" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "host" : "localhost:20002",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.299-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 "_id" : 2,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 "host" : "localhost:20003",
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 "arbiterOnly" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 "buildIndexes" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 "hidden" : false,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 "priority" : 0,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 "tags" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 "slaveDelay" : NumberLong(0),
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 "votes" : 1
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 ],
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 "settings" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.300-0500 "chainingAllowed" : true,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 "heartbeatIntervalMillis" : 2000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 "heartbeatTimeoutSecs" : 10,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 "electionTimeoutMillis" : 86400000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 "catchUpTimeoutMillis" : -1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 "catchUpTakeoverDelayMillis" : 30000,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 "getLastErrorModes" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 "getLastErrorDefaults" : {
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 "w" : 1,
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 "wtimeout" : 0
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 },
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 "replicaSetId" : ObjectId("5ddd7d683bbfe7fa5630d3b8")
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 [jsTest] New session started with sessionID: { "id" : UUID("c03bc18a-ef7c-4c9a-bde6-397ece3d2c09") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:18.295-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53956 #98 (15 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.301-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500 [jsTest] New session started with sessionID: { "id" : UUID("5cb4c44e-787a-46fb-980a-877c74cc8e85") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500 [jsTest] New session started with sessionID: { "id" : UUID("2cbd4cb0-73e2-4553-8ef9-2e48740dd0a3") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500 [jsTest] New session started with sessionID: { "id" : UUID("285a2709-6782-43b4-bd0a-fe486e3a1332") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.302-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.303-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.303-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.303-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.303-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.303-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.303-0500 [jsTest] New session started with sessionID: { "id" : UUID("9ae44b05-b29e-4063-b32f-05c08408bc3e") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.303-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.303-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.303-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.303-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.303-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.303-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.303-0500 [jsTest] New session started with sessionID: { "id" : UUID("12506bc7-8451-4c14-b3f8-c250708f0063") } and options: { "causalConsistency" : false }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.303-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.303-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.303-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:18.295-0500 I NETWORK [conn98] received client metadata from 127.0.0.1:53064 conn98: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:18.295-0500 I NETWORK [conn94] received client metadata from 127.0.0.1:52696 conn94: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:18.296-0500 I NETWORK [conn94] received client metadata from 127.0.0.1:36060 conn94: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:18.295-0500 I NETWORK [conn98] received client metadata from 127.0.0.1:53956 conn98: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.309-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.310-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.310-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.310-0500 [jsTest] Freezing nodes: [localhost:20002,localhost:20003]
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.310-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.310-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.310-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:18.310-0500 I COMMAND [conn98] Attempting to step down in response to replSetStepDown command
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.310-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.310-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.310-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.310-0500 [jsTest] Freezing nodes: [localhost:20005,localhost:20006]
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.311-0500 [jsTest] ----
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.311-0500
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.311-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:18.311-0500 I REPL [conn98] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:18.311-0500 I COMMAND [conn94] Attempting to step down in response to replSetStepDown command
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:18.311-0500 I COMMAND [conn98] Attempting to step down in response to replSetStepDown command
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:18.312-0500 I REPL [conn94] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:18.312-0500 I REPL [conn98] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:18.312-0500 I COMMAND [conn94] Attempting to step down in response to replSetStepDown command
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:18.313-0500 I REPL [conn94] 'freezing' for 86400 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.314-0500 I COMMAND [conn201] CMD fsync: sync:1 lock:1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.315-0500 I COMMAND [conn205] CMD fsync: sync:1 lock:1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.362-0500 W COMMAND [fsyncLockWorker] WARNING: instance is locked, blocking all writes. The fsync command has finished execution, remember to unlock the instance using fsyncUnlock().
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.362-0500 I COMMAND [conn201] mongod is locked and no writes are allowed. db.fsyncUnlock() to unlock
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.362-0500 I COMMAND [conn201] Lock count is 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.362-0500 I COMMAND [conn201] For more info see http://dochub.mongodb.org/core/fsynccommand
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.363-0500 ReplSetTest awaitReplication: going to check only localhost:20002,localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.370-0500 ReplSetTest awaitReplication: starting: optime for primary, localhost:20001, is { "ts" : Timestamp(1574796738, 6), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.370-0500 ReplSetTest awaitReplication: checking secondaries against latest primary optime { "ts" : Timestamp(1574796738, 6), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.372-0500 ReplSetTest awaitReplication: checking secondary #0: localhost:20002
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.373-0500 ReplSetTest awaitReplication: secondary #0, localhost:20002, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.375-0500 ReplSetTest awaitReplication: checking secondary #1: localhost:20003
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.376-0500 ReplSetTest awaitReplication: secondary #1, localhost:20003, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.376-0500 ReplSetTest awaitReplication: finished: all 2 secondaries synced at optime { "ts" : Timestamp(1574796738, 6), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.380-0500 checkDBHashesForReplSet checking data hashes against primary: localhost:20001
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.380-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20002
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.382-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.512-0500 I COMMAND [conn201] command: unlock requested
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.515-0500 I COMMAND [conn201] fsyncUnlock completed. mongod is now unlocked and free to accept writes
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:18.516-0500 I REPL [conn98] 'unfreezing'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:18.516-0500 I REPL [conn98] 'unfreezing'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:18.517-0500 I NETWORK [conn179] end connection 127.0.0.1:45878 (2 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.518-0500 I NETWORK [conn200] end connection 127.0.0.1:40022 (42 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.518-0500 I NETWORK [conn201] end connection 127.0.0.1:40028 (41 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:18.518-0500 I NETWORK [conn98] end connection 127.0.0.1:53956 (14 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:18.518-0500 I NETWORK [conn98] end connection 127.0.0.1:53064 (13 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.768-0500 W COMMAND [fsyncLockWorker] WARNING: instance is locked, blocking all writes. The fsync command has finished execution, remember to unlock the instance using fsyncUnlock().
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.768-0500 I COMMAND [conn205] mongod is locked and no writes are allowed. db.fsyncUnlock() to unlock
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.768-0500 I COMMAND [conn205] Lock count is 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.768-0500 I COMMAND [conn205] For more info see http://dochub.mongodb.org/core/fsynccommand
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.768-0500 I COMMAND [conn205] command admin.$cmd appName: "MongoDB Shell" command: fsync { fsync: 1.0, lock: 1.0, allowFsyncFailure: true, lsid: { id: UUID("5cb4c44e-787a-46fb-980a-877c74cc8e85") }, $clusterTime: { clusterTime: Timestamp(1574796738, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:477 locks:{ Mutex: { acquireCount: { W: 1 } } } protocol:op_msg 452ms
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.768-0500 ReplSetTest awaitReplication: going to check only localhost:20005,localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.775-0500 ReplSetTest awaitReplication: starting: optime for primary, localhost:20004, is { "ts" : Timestamp(1574796738, 8), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.775-0500 ReplSetTest awaitReplication: checking secondaries against latest primary optime { "ts" : Timestamp(1574796738, 8), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.778-0500 ReplSetTest awaitReplication: checking secondary #0: localhost:20005
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.779-0500 ReplSetTest awaitReplication: secondary #0, localhost:20005, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.781-0500 ReplSetTest awaitReplication: checking secondary #1: localhost:20006
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.781-0500 ReplSetTest awaitReplication: secondary #1, localhost:20006, is synced
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.781-0500 ReplSetTest awaitReplication: finished: all 2 secondaries synced at optime { "ts" : Timestamp(1574796738, 8), "t" : NumberLong(1) }
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.785-0500 checkDBHashesForReplSet checking data hashes against primary: localhost:20004
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.786-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20005
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.787-0500 checkDBHashesForReplSet going to check data hashes on secondary: localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.913-0500 I COMMAND [conn205] command: unlock requested
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.916-0500 I COMMAND [conn205] fsyncUnlock completed. mongod is now unlocked and free to accept writes
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:18.917-0500 I REPL [conn94] 'unfreezing'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:18.917-0500 I REPL [conn94] 'unfreezing'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:18.918-0500 I NETWORK [conn178] end connection 127.0.0.1:45876 (1 connection now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.918-0500 I NETWORK [conn204] end connection 127.0.0.1:47490 (43 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.919-0500 I NETWORK [conn205] end connection 127.0.0.1:47492 (42 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:18.919-0500 I NETWORK [conn94] end connection 127.0.0.1:52696 (12 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:18.919-0500 I NETWORK [conn94] end connection 127.0.0.1:36060 (12 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.921-0500 Finished data consistency checks for cluster in 728 ms.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:18.922-0500 I NETWORK [conn177] end connection 127.0.0.1:45854 (0 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:18.922-0500 I NETWORK [conn142] end connection 127.0.0.1:57260 (21 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.922-0500 I NETWORK [conn199] end connection 127.0.0.1:40008 (40 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.923-0500 I NETWORK [conn203] end connection 127.0.0.1:47482 (41 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:18.930-0500 I NETWORK [conn202] end connection 127.0.0.1:47480 (40 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:18.930-0500 I NETWORK [conn97] end connection 127.0.0.1:53924 (13 connections now open)
[CheckReplDBHash:job0:agg_out:CheckReplDBHash] 2019-11-26T14:32:18.931-0500 JSTest jstests/hooks/run_check_repl_dbhash.js finished.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.930-0500 I NETWORK [conn198] end connection 127.0.0.1:40006 (39 connections now open)
[executor:fsm_workload_test:job0] 2019-11-26T14:32:18.931-0500 agg_out:CheckReplDBHash ran in 0.83 seconds: no failures detected.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:18.930-0500 I NETWORK [conn141] end connection 127.0.0.1:57258 (20 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:18.930-0500 I NETWORK [conn93] end connection 127.0.0.1:36036 (11 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:18.930-0500 I NETWORK [conn93] end connection 127.0.0.1:52674 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:18.930-0500 I NETWORK [conn97] end connection 127.0.0.1:53038 (12 connections now open)
[executor:fsm_workload_test:job0] 2019-11-26T14:32:18.932-0500 Running agg_out:ValidateCollections...
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:18.933-0500 Starting JSTest jstests/hooks/run_validate_collections.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_validate_collections"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_validate_collections.js
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.933-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796733, 6)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.933-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-336-8224331490264904478 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 15)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.935-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-337-8224331490264904478 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 15)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.936-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-335-8224331490264904478 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 15)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.937-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-339-8224331490264904478 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 23)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.938-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-340-8224331490264904478 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 23)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:18.939-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-338-8224331490264904478 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 23)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:18.944-0500 JSTest jstests/hooks/run_validate_collections.js started with pid 16341.
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:18.966-0500 MongoDB shell version v0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.017-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.017-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45896 #180 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.017-0500 I NETWORK [conn180] received client metadata from 127.0.0.1:45896 conn180: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.019-0500 Implicit session: session { "id" : UUID("7994986f-0a35-449f-a101-17a788a7151d") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.021-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.022-0500 true
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.026-0500 2019-11-26T14:32:19.026-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.026-0500 2019-11-26T14:32:19.026-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.026-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57300 #143 (21 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.026-0500 I NETWORK [conn143] received client metadata from 127.0.0.1:57300 conn143: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.027-0500 2019-11-26T14:32:19.027-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.027-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57302 #144 (22 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.027-0500 I NETWORK [conn144] received client metadata from 127.0.0.1:57302 conn144: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.028-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.028-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.028-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.028-0500 [jsTest] New session started with sessionID: { "id" : UUID("0077bff5-45cd-4913-8daa-c9fd09ec1c22") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.028-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.028-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.028-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.030-0500 2019-11-26T14:32:19.030-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.030-0500 2019-11-26T14:32:19.030-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.030-0500 2019-11-26T14:32:19.030-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.030-0500 2019-11-26T14:32:19.030-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.030-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40044 #202 (40 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.030-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53082 #99 (13 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.030-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53968 #99 (14 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.031-0500 I NETWORK [conn99] received client metadata from 127.0.0.1:53968 conn99: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.031-0500 I NETWORK [conn99] received client metadata from 127.0.0.1:53082 conn99: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.031-0500 I NETWORK [conn202] received client metadata from 127.0.0.1:40044 conn202: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.031-0500 2019-11-26T14:32:19.031-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.031-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40050 #203 (41 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.031-0500 I NETWORK [conn203] received client metadata from 127.0.0.1:40050 conn203: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.032-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.032-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.032-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.032-0500 [jsTest] New session started with sessionID: { "id" : UUID("6a8fb8fe-32b4-462a-ab4d-6d2713b9138f") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.032-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.032-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.032-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.032-0500 2019-11-26T14:32:19.032-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.032-0500 2019-11-26T14:32:19.032-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.032-0500 2019-11-26T14:32:19.032-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.032-0500 2019-11-26T14:32:19.032-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.033-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47518 #206 (41 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.033-0500 I NETWORK [listener] connection accepted from 127.0.0.1:36080 #95 (12 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.033-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52718 #95 (12 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.033-0500 I NETWORK [conn206] received client metadata from 127.0.0.1:47518 conn206: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.033-0500 I NETWORK [conn95] received client metadata from 127.0.0.1:52718 conn95: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.033-0500 2019-11-26T14:32:19.033-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.033-0500 I NETWORK [conn95] received client metadata from 127.0.0.1:36080 conn95: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.033-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47524 #207 (42 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.034-0500 I NETWORK [conn207] received client metadata from 127.0.0.1:47524 conn207: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.034-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.034-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.034-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.034-0500 [jsTest] New session started with sessionID: { "id" : UUID("92452997-53f6-4084-af6b-88f96f1aae1b") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.034-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.035-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.035-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.116-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.116-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.116-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.116-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.116-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45918 #181 (2 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.116-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45919 #182 (3 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.116-0500 I NETWORK [conn181] received client metadata from 127.0.0.1:45918 conn181: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.116-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45920 #183 (4 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.116-0500 I NETWORK [conn182] received client metadata from 127.0.0.1:45919 conn182: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.117-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45926 #184 (5 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.117-0500 I NETWORK [conn183] received client metadata from 127.0.0.1:45920 conn183: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.117-0500 I NETWORK [conn184] received client metadata from 127.0.0.1:45926 conn184: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.117-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.117-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.117-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45928 #185 (6 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.117-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45930 #186 (7 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.117-0500 I NETWORK [conn185] received client metadata from 127.0.0.1:45928 conn185: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.118-0500 I NETWORK [conn186] received client metadata from 127.0.0.1:45930 conn186: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.118-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.118-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45932 #187 (8 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.118-0500 I NETWORK [conn187] received client metadata from 127.0.0.1:45932 conn187: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.118-0500 Implicit session: session { "id" : UUID("00fe2439-b61b-45dd-9874-cd8d566eb35c") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.118-0500 Implicit session: session { "id" : UUID("2f5faf88-2d14-47f0-900a-d965bff32d10") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.118-0500 Implicit session: session { "id" : UUID("7bd32b8d-58ea-4047-953e-55d17e3ebd72") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.119-0500 Implicit session: session { "id" : UUID("473420fb-307a-4b68-936a-217897aa3b10") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.119-0500 Implicit session: session { "id" : UUID("5195bd1e-0b8d-40f2-8ebb-24ed1c5a58c2") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.119-0500 Implicit session: session { "id" : UUID("752f3f28-d0a1-49af-bd13-51f8aca1a46f") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.120-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.120-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.120-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.120-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.120-0500 Implicit session: session { "id" : UUID("26eb6951-0bbd-4155-a541-7c6adc529247") }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.121-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.121-0500 MongoDB server version: 0.0.0
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.121-0500 Running validate() on localhost:20001
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.121-0500 Running validate() on localhost:20000
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.121-0500 Running validate() on localhost:20002
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.122-0500 Running validate() on localhost:20005
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.121-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40076 #204 (42 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.121-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57338 #145 (23 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.122-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52746 #96 (13 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.122-0500 Running validate() on localhost:20006
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.121-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53114 #100 (14 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.122-0500 I NETWORK [conn145] received client metadata from 127.0.0.1:57338 conn145: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.122-0500 I NETWORK [conn204] received client metadata from 127.0.0.1:40076 conn204: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.123-0500 Running validate() on localhost:20004
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.123-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.122-0500 I NETWORK [listener] connection accepted from 127.0.0.1:36108 #96 (13 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.123-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.123-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.122-0500 I NETWORK [conn96] received client metadata from 127.0.0.1:52746 conn96: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.124-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.124-0500 [jsTest] New session started with sessionID: { "id" : UUID("4b26232a-bef5-497b-9016-6039e40e65cf") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.123-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47552 #208 (43 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.124-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.122-0500 I NETWORK [conn100] received client metadata from 127.0.0.1:53114 conn100: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.124-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.124-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.123-0500 I NETWORK [conn96] received client metadata from 127.0.0.1:36108 conn96: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.124-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.123-0500 I NETWORK [conn208] received client metadata from 127.0.0.1:47552 conn208: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.125-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.125-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.125-0500 [jsTest] New session started with sessionID: { "id" : UUID("e3c37efa-247d-4846-a31b-30ac8575ea7a") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.125-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.125-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.125-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.125-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.125-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.125-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.125-0500 [jsTest] New session started with sessionID: { "id" : UUID("ae46471e-5946-4957-8d62-cc48a854257e") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.125-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.126-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.126-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.126-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.126-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.126-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.125-0500 I NETWORK [listener] connection accepted from 127.0.0.1:54010 #100 (15 connections now open)
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.126-0500 [jsTest] New session started with sessionID: { "id" : UUID("98ffac19-46a8-4dc6-a44a-a9b1b97f84be") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.126-0500 I NETWORK [conn100] received client metadata from 127.0.0.1:54010 conn100: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.126-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.126-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.126-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.126-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.127-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.127-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.127-0500 [jsTest] New session started with sessionID: { "id" : UUID("1ed47288-a80c-4312-ac63-bbc98c658ea4") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.127-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.127-0500 I COMMAND [conn100] CMD: validate admin.system.version, full:true
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.127-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.127-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.127-0500 I COMMAND [conn204] CMD: validate admin.system.version, full:true
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.127-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.127-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.128-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.128-0500 [jsTest] New session started with sessionID: { "id" : UUID("13b3a81f-3a3f-4750-8739-d03b6c4b98b8") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.128-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.128-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.128-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.128-0500 I COMMAND [conn96] CMD: validate admin.system.version, full:true
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.128-0500 Running validate() on localhost:20003
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.128-0500 I COMMAND [conn208] CMD: validate admin.system.version, full:true
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.128-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.129-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.129-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.129-0500 [jsTest] New session started with sessionID: { "id" : UUID("81f86b8a-8dcc-41da-aefa-63d691fb594d") } and options: { "causalConsistency" : false }
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.129-0500 [jsTest] ----
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.129-0500
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.129-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.128-0500 I INDEX [conn204] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.128-0500 I COMMAND [conn96] CMD: validate admin.system.version, full:true
[ValidateCollections:job0:agg_out:ValidateCollections] 2019-11-26T14:32:19.346-0500 JSTest jstests/hooks/run_validate_collections.js finished.
[executor:fsm_workload_test:job0] 2019-11-26T14:32:19.956-0500 agg_out:ValidateCollections ran in 1.02 seconds: no failures detected.
[executor:fsm_workload_test:job0] 2019-11-26T14:32:19.956-0500 Running agg_out:CleanupConcurrencyWorkloads...
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.128-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.129-0500 I COMMAND [conn145] CMD: validate admin.system.keys, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.129-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.129-0500 I INDEX [conn208] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.130-0500 I INDEX [conn204] validating collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.132-0500 I COMMAND [conn100] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.213-0500 I NETWORK [conn182] end connection 127.0.0.1:45919 (7 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.129-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.130-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection admin.system.keys
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.132-0500 I INDEX [conn96] validating collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:19.958-0500 I NETWORK [listener] connection accepted from 127.0.0.1:59090 #54 (1 connection now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.132-0500 I INDEX [conn208] validating collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.130-0500 I INDEX [conn100] validating collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.130-0500 I INDEX [conn204] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.133-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.216-0500 I NETWORK [conn187] end connection 127.0.0.1:45932 (6 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.133-0500 I INDEX [conn96] validating collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.133-0500 I INDEX [conn145] validating collection admin.system.keys (UUID: 807238e6-a72f-4ef0-b305-4bab60afd0e6)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.132-0500 I INDEX [conn96] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:19.959-0500 I NETWORK [conn54] received client metadata from 127.0.0.1:59090 conn54: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.133-0500 I INDEX [conn208] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.130-0500 I INDEX [conn100] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.130-0500 I INDEX [conn204] Validation complete for collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.135-0500 I INDEX [conn100] validating collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.221-0500 I NETWORK [conn181] end connection 127.0.0.1:45918 (5 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.133-0500 I INDEX [conn96] validating index consistency _id_ on collection admin.system.version
[CleanupConcurrencyWorkloads:job0:agg_out:CleanupConcurrencyWorkloads] 2019-11-26T14:32:19.964-0500 Dropping all databases except for ['config', 'local', '$external', 'admin']
[CleanupConcurrencyWorkloads:job0:agg_out:CleanupConcurrencyWorkloads] 2019-11-26T14:32:19.964-0500 Dropping database test4_fsmdb0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.133-0500 I INDEX [conn145] validating index consistency _id_ on collection admin.system.keys
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.132-0500 I INDEX [conn96] Validation complete for collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5). No corruption found.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:19.961-0500 I NETWORK [listener] connection accepted from 127.0.0.1:59094 #55 (2 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.133-0500 I INDEX [conn208] Validation complete for collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.130-0500 I INDEX [conn100] Validation complete for collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.132-0500 I COMMAND [conn204] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.135-0500 I INDEX [conn100] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.252-0500 I NETWORK [conn186] end connection 127.0.0.1:45930 (4 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.133-0500 I INDEX [conn96] Validation complete for collection admin.system.version (UUID: 19b398bd-025a-4aca-9299-76bf6d82acc5). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.133-0500 I INDEX [conn145] Validation complete for collection admin.system.keys (UUID: 807238e6-a72f-4ef0-b305-4bab60afd0e6). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.135-0500 I COMMAND [conn96] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:19.961-0500 I NETWORK [conn55] received client metadata from 127.0.0.1:59094 conn55: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.134-0500 I COMMAND [conn208] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.132-0500 I COMMAND [conn100] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.133-0500 I INDEX [conn204] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.135-0500 I INDEX [conn100] Validation complete for collection admin.system.version (UUID: 70439088-b608-4bfe-8d4e-f62378562d13). No corruption found.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.258-0500 I NETWORK [conn183] end connection 127.0.0.1:45920 (3 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.134-0500 I COMMAND [conn96] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.134-0500 I COMMAND [conn145] CMD: validate admin.system.version, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.136-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.135-0500 I INDEX [conn208] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.133-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.135-0500 I INDEX [conn204] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.137-0500 I COMMAND [conn100] CMD: validate config.cache.chunks.config.system.sessions, full:true
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.331-0500 I NETWORK [conn185] end connection 127.0.0.1:45928 (2 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.136-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.135-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.138-0500 I INDEX [conn96] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.137-0500 I INDEX [conn208] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.135-0500 I INDEX [conn100] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.137-0500 I INDEX [conn204] validating collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.138-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.333-0500 I NETWORK [conn184] end connection 127.0.0.1:45926 (1 connection now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.138-0500 I INDEX [conn96] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.138-0500 I INDEX [conn145] validating collection admin.system.version (UUID: 1b1834a4-71ee-49e7-abbc-7ae09d5089b2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.141-0500 I INDEX [conn96] validating collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.141-0500 I INDEX [conn208] validating collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.141-0500 I INDEX [conn208] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.137-0500 I INDEX [conn204] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.137-0500 I INDEX [conn204] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.140-0500 I INDEX [conn100] validating the internal structure of index lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.140-0500 I INDEX [conn96] validating collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.138-0500 I INDEX [conn145] validating index consistency _id_ on collection admin.system.version
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.141-0500 I INDEX [conn96] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.141-0500 I INDEX [conn96] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.142-0500 I INDEX [conn96] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385). No corruption found.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.336-0500 I NETWORK [conn180] end connection 127.0.0.1:45896 (0 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.137-0500 I INDEX [conn204] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.138-0500 I COMMAND [conn204] CMD: validate config.cache.chunks.test4_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.138-0500 I INDEX [conn204] validating the internal structure of index _id_ on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.140-0500 I INDEX [conn204] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.137-0500 I INDEX [conn100] validating collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.141-0500 I INDEX [conn208] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.142-0500 I COMMAND [conn96] CMD: validate config.cache.chunks.test4_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.960-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45950 #188 (1 connection now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.142-0500 I INDEX [conn100] validating collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.140-0500 I INDEX [conn96] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.138-0500 I INDEX [conn145] Validation complete for collection admin.system.version (UUID: 1b1834a4-71ee-49e7-abbc-7ae09d5089b2). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.142-0500 I INDEX [conn204] validating collection config.cache.chunks.test4_fsmdb0.fsmcoll0 (UUID: 647e6274-b0dc-4671-90c7-65b5ed709ba8)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.142-0500 I INDEX [conn204] validating index consistency _id_ on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.141-0500 I INDEX [conn208] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.142-0500 W STORAGE [conn96] Could not complete validation of table:collection-459--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.961-0500 I NETWORK [conn188] received client metadata from 127.0.0.1:45950 conn188: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.142-0500 I INDEX [conn100] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.140-0500 I INDEX [conn96] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.140-0500 I COMMAND [conn145] CMD: validate config.actionlog, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.137-0500 I INDEX [conn100] validating index consistency _id_ on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.142-0500 I INDEX [conn204] validating index consistency lastmod_1 on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.141-0500 I COMMAND [conn208] CMD: validate config.cache.chunks.test4_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.142-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.964-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45954 #189 (2 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.142-0500 I INDEX [conn100] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.140-0500 I INDEX [conn96] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 89d743ca-3d59-460f-a575-cb12eb122385). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.141-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection config.actionlog
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.137-0500 I INDEX [conn100] validating index consistency lastmod_1 on collection config.cache.chunks.config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.142-0500 I INDEX [conn204] Validation complete for collection config.cache.chunks.test4_fsmdb0.fsmcoll0 (UUID: 647e6274-b0dc-4671-90c7-65b5ed709ba8). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.142-0500 I INDEX [conn208] validating the internal structure of index _id_ on collection config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.142-0500 W STORAGE [conn96] Could not complete validation of table:index-460--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:19.965-0500 I NETWORK [conn189] received client metadata from 127.0.0.1:45954 conn189: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.142-0500 I INDEX [conn100] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.141-0500 I COMMAND [conn96] CMD: validate config.cache.chunks.test4_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.144-0500 I INDEX [conn145] validating collection config.actionlog (UUID: ff427093-1de4-4a9f-83c9-6b01392e1aea)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.137-0500 I INDEX [conn100] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: 64c6a829-dbfe-4506-b9df-8620f75d7efb). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.143-0500 I COMMAND [conn204] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.144-0500 I INDEX [conn208] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.142-0500 I INDEX [conn96] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.143-0500 I COMMAND [conn100] CMD: validate config.cache.chunks.test4_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.141-0500 W STORAGE [conn96] Could not complete validation of table:collection-459--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.144-0500 I INDEX [conn145] validating index consistency _id_ on collection config.actionlog
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.138-0500 I COMMAND [conn100] CMD: validate config.cache.chunks.test4_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.144-0500 I INDEX [conn204] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.147-0500 I INDEX [conn208] validating collection config.cache.chunks.test4_fsmdb0.agg_out (UUID: b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.143-0500 W STORAGE [conn96] Could not complete validation of table:index-461--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.143-0500 W STORAGE [conn100] Could not complete validation of table:collection-357--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.141-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.144-0500 I INDEX [conn145] Validation complete for collection config.actionlog (UUID: ff427093-1de4-4a9f-83c9-6b01392e1aea). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.138-0500 W STORAGE [conn100] Could not complete validation of table:collection-357--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.146-0500 I INDEX [conn204] validating collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.147-0500 I INDEX [conn208] validating index consistency _id_ on collection config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.143-0500 I INDEX [conn96] validating collection config.cache.chunks.test4_fsmdb0.agg_out (UUID: b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.143-0500 I INDEX [conn96] validating index consistency _id_ on collection config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.143-0500 I INDEX [conn96] validating index consistency lastmod_1 on collection config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.145-0500 I COMMAND [conn145] CMD: validate config.changelog, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.138-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.146-0500 I INDEX [conn204] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.147-0500 I INDEX [conn208] validating index consistency lastmod_1 on collection config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.143-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.141-0500 W STORAGE [conn96] Could not complete validation of table:index-460--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.143-0500 I INDEX [conn96] Validation complete for collection config.cache.chunks.test4_fsmdb0.agg_out (UUID: b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.145-0500 W STORAGE [conn145] Could not complete validation of table:collection-49-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.138-0500 W STORAGE [conn100] Could not complete validation of table:index-358--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.147-0500 I INDEX [conn204] Validation complete for collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.147-0500 I INDEX [conn208] Validation complete for collection config.cache.chunks.test4_fsmdb0.agg_out (UUID: b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.143-0500 W STORAGE [conn100] Could not complete validation of table:index-358--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.141-0500 I INDEX [conn96] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.144-0500 I COMMAND [conn96] CMD: validate config.cache.chunks.test4_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.145-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection config.changelog
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.138-0500 I INDEX [conn100] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.147-0500 I COMMAND [conn204] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.147-0500 I COMMAND [conn208] CMD: validate config.cache.chunks.test4_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.143-0500 I INDEX [conn100] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.141-0500 W STORAGE [conn96] Could not complete validation of table:index-461--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.144-0500 W STORAGE [conn96] Could not complete validation of table:collection-229--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.145-0500 W STORAGE [conn145] Could not complete validation of table:index-50-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.138-0500 W STORAGE [conn100] Could not complete validation of table:index-359--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.148-0500 I INDEX [conn204] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.148-0500 I INDEX [conn208] validating the internal structure of index _id_ on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.143-0500 W STORAGE [conn100] Could not complete validation of table:index-359--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.141-0500 I INDEX [conn96] validating collection config.cache.chunks.test4_fsmdb0.agg_out (UUID: b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.144-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.145-0500 I INDEX [conn145] validating collection config.changelog (UUID: 65b892c8-48e9-4ca9-8300-743a486a361f)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.138-0500 I INDEX [conn100] validating collection config.cache.chunks.test4_fsmdb0.fsmcoll0 (UUID: 647e6274-b0dc-4671-90c7-65b5ed709ba8)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.150-0500 I INDEX [conn204] validating collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.150-0500 I INDEX [conn208] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[executor:fsm_workload_test:job0] 2019-11-26T14:32:20.019-0500 agg_out:CleanupConcurrencyWorkloads ran in 0.06 seconds: no failures detected.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.143-0500 I INDEX [conn100] validating collection config.cache.chunks.test4_fsmdb0.fsmcoll0 (UUID: 647e6274-b0dc-4671-90c7-65b5ed709ba8)
[executor] 2019-11-26T14:32:20.425-0500 Waiting for threads to complete
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.141-0500 I INDEX [conn96] validating index consistency _id_ on collection config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.144-0500 W STORAGE [conn96] Could not complete validation of table:index-230--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.145-0500 I INDEX [conn145] validating index consistency _id_ on collection config.changelog
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.138-0500 I INDEX [conn100] validating index consistency _id_ on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.150-0500 I INDEX [conn204] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:20.019-0500 I NETWORK [conn55] end connection 127.0.0.1:59094 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.019-0500 I NETWORK [conn189] end connection 127.0.0.1:45954 (1 connection now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.153-0500 I INDEX [conn208] validating collection config.cache.chunks.test4_fsmdb0.fsmcoll0 (UUID: c7f3cab2-be92-4a48-8ca9-60ce74a83411)
[CheckReplDBHashInBackground:job0] Stopping the background check repl dbhash thread.
[executor] 2019-11-26T14:32:20.426-0500 Threads are completed!
[executor] 2019-11-26T14:32:20.426-0500 Summary of latest execution: All 5 test(s) passed in 24.32 seconds.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.143-0500 I INDEX [conn100] validating index consistency _id_ on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.141-0500 I INDEX [conn96] validating index consistency lastmod_1 on collection config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.144-0500 I INDEX [conn96] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.146-0500 I INDEX [conn145] Validation complete for collection config.changelog (UUID: 65b892c8-48e9-4ca9-8300-743a486a361f). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.138-0500 I INDEX [conn100] validating index consistency lastmod_1 on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.150-0500 I INDEX [conn204] Validation complete for collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f). No corruption found.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:20.019-0500 I NETWORK [conn54] end connection 127.0.0.1:59090 (0 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.019-0500 I NETWORK [conn188] end connection 127.0.0.1:45950 (0 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.153-0500 I INDEX [conn208] validating index consistency _id_ on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.143-0500 I INDEX [conn100] validating index consistency lastmod_1 on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.141-0500 I INDEX [conn96] Validation complete for collection config.cache.chunks.test4_fsmdb0.agg_out (UUID: b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2). No corruption found.
[CheckReplDBHashInBackground:job0] Starting the background check repl dbhash thread.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.144-0500 W STORAGE [conn96] Could not complete validation of table:index-231--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.146-0500 I COMMAND [conn145] CMD: validate config.chunks, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.138-0500 I INDEX [conn100] Validation complete for collection config.cache.chunks.test4_fsmdb0.fsmcoll0 (UUID: 647e6274-b0dc-4671-90c7-65b5ed709ba8). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.151-0500 I COMMAND [conn204] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:20.428-0500 I NETWORK [listener] connection accepted from 127.0.0.1:59098 #56 (1 connection now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.429-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45958 #190 (1 connection now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.153-0500 I INDEX [conn208] validating index consistency lastmod_1 on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.143-0500 I INDEX [conn100] Validation complete for collection config.cache.chunks.test4_fsmdb0.fsmcoll0 (UUID: 647e6274-b0dc-4671-90c7-65b5ed709ba8). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.142-0500 I COMMAND [conn96] CMD: validate config.cache.chunks.test4_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.144-0500 I INDEX [conn96] validating collection config.cache.chunks.test4_fsmdb0.fsmcoll0 (UUID: c7f3cab2-be92-4a48-8ca9-60ce74a83411)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.146-0500 W STORAGE [conn145] Could not complete validation of table:collection-17-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.139-0500 I COMMAND [conn100] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.152-0500 I INDEX [conn204] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:20.433-0500 I NETWORK [conn56] received client metadata from 127.0.0.1:59098 conn56: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.433-0500 I NETWORK [conn190] received client metadata from 127.0.0.1:45958 conn190: { driver: { name: "PyMongo", version: "3.7.2" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "4.4.0-112-generic" }, platform: "CPython 3.7.4.final.0" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.153-0500 I INDEX [conn208] Validation complete for collection config.cache.chunks.test4_fsmdb0.fsmcoll0 (UUID: c7f3cab2-be92-4a48-8ca9-60ce74a83411). No corruption found.
[CheckReplDBHashInBackground:job0] Resuming the background check repl dbhash thread.
[executor:fsm_workload_test:job0] 2019-11-26T14:32:20.438-0500 Running agg_out.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval TestData = new Object(); TestData["usingReplicaSetShards"] = true; TestData["runningWithAutoSplit"] = false; TestData["runningWithBalancer"] = false; TestData["fsmWorkloads"] = ["jstests/concurrency/fsm_workloads/agg_out.js"]; TestData["resmokeDbPathPrefix"] = "/home/nz_linux/data/job0/resmoke"; TestData["dbNamePrefix"] = "test5_"; TestData["sameDB"] = false; TestData["sameCollection"] = false; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "resmoke_runner"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); --readMode=commands mongodb://localhost:20007,localhost:20008 jstests/concurrency/fsm_libs/resmoke_runner.js
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.143-0500 I COMMAND [conn100] CMD: validate config.cache.collections, full:true
[executor:fsm_workload_test:job0] 2019-11-26T14:32:20.438-0500 Running agg_out:CheckReplDBHashInBackground...
[fsm_workload_test:agg_out] 2019-11-26T14:32:20.439-0500 Starting FSM workload jstests/concurrency/fsm_workloads/agg_out.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval TestData = new Object(); TestData["usingReplicaSetShards"] = true; TestData["runningWithAutoSplit"] = false; TestData["runningWithBalancer"] = false; TestData["fsmWorkloads"] = ["jstests/concurrency/fsm_workloads/agg_out.js"]; TestData["resmokeDbPathPrefix"] = "/home/nz_linux/data/job0/resmoke"; TestData["dbNamePrefix"] = "test5_"; TestData["sameDB"] = false; TestData["sameCollection"] = false; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "resmoke_runner"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); --readMode=commands mongodb://localhost:20007,localhost:20008 jstests/concurrency/fsm_libs/resmoke_runner.js
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.142-0500 W STORAGE [conn96] Could not complete validation of table:collection-229--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:20.440-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash_background.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash_background"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash_background.js
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.144-0500 I INDEX [conn96] validating index consistency _id_ on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.146-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection config.chunks
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.139-0500 W STORAGE [conn100] Could not complete validation of table:collection-29--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.153-0500 I INDEX [conn204] validating the internal structure of index lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:20.435-0500 I NETWORK [conn56] end connection 127.0.0.1:59098 (0 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.435-0500 I NETWORK [conn190] end connection 127.0.0.1:45958 (0 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.154-0500 I COMMAND [conn208] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.144-0500 W STORAGE [conn100] Could not complete validation of table:collection-29--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.142-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.144-0500 I INDEX [conn96] validating index consistency lastmod_1 on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.146-0500 W STORAGE [conn145] Could not complete validation of table:index-18-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.139-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.156-0500 I INDEX [conn204] validating collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.155-0500 I INDEX [conn208] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.144-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.142-0500 W STORAGE [conn96] Could not complete validation of table:index-230--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.144-0500 I INDEX [conn96] Validation complete for collection config.cache.chunks.test4_fsmdb0.fsmcoll0 (UUID: c7f3cab2-be92-4a48-8ca9-60ce74a83411). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.146-0500 I INDEX [conn145] validating the internal structure of index ns_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.139-0500 W STORAGE [conn100] Could not complete validation of table:index-30--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.156-0500 I INDEX [conn204] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.158-0500 I INDEX [conn208] validating collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.144-0500 W STORAGE [conn100] Could not complete validation of table:index-30--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.142-0500 I INDEX [conn96] validating the internal structure of index lastmod_1 on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.145-0500 I COMMAND [conn96] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.146-0500 W STORAGE [conn145] Could not complete validation of table:index-19-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.139-0500 I INDEX [conn100] validating collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.156-0500 I INDEX [conn204] validating index consistency lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.158-0500 I INDEX [conn208] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.144-0500 I INDEX [conn100] validating collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.142-0500 W STORAGE [conn96] Could not complete validation of table:index-231--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.145-0500 W STORAGE [conn96] Could not complete validation of table:collection-27--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.146-0500 I INDEX [conn145] validating the internal structure of index ns_1_shard_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.139-0500 I INDEX [conn100] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.156-0500 I INDEX [conn204] Validation complete for collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.158-0500 I INDEX [conn208] Validation complete for collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.144-0500 I INDEX [conn100] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.142-0500 I INDEX [conn96] validating collection config.cache.chunks.test4_fsmdb0.fsmcoll0 (UUID: c7f3cab2-be92-4a48-8ca9-60ce74a83411)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.145-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.147-0500 W STORAGE [conn145] Could not complete validation of table:index-20-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.139-0500 I INDEX [conn100] Validation complete for collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.156-0500 I COMMAND [conn204] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.158-0500 I COMMAND [conn208] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.144-0500 I INDEX [conn100] Validation complete for collection config.cache.collections (UUID: 9215d95d-c07d-4373-a3d5-16d1fad88b5b). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.142-0500 I INDEX [conn96] validating index consistency _id_ on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.145-0500 W STORAGE [conn96] Could not complete validation of table:index-28--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.147-0500 I INDEX [conn145] validating the internal structure of index ns_1_lastmod_1 on collection config.chunks
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.140-0500 I COMMAND [conn100] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.156-0500 W STORAGE [conn204] Could not complete validation of table:collection-15-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.159-0500 I INDEX [conn208] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.144-0500 I COMMAND [conn100] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.142-0500 I INDEX [conn96] validating index consistency lastmod_1 on collection config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.145-0500 I INDEX [conn96] validating collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.147-0500 W STORAGE [conn145] Could not complete validation of table:index-21-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.140-0500 W STORAGE [conn100] Could not complete validation of table:collection-27--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.156-0500 I INDEX [conn204] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.162-0500 I INDEX [conn208] validating collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.144-0500 W STORAGE [conn100] Could not complete validation of table:collection-27--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.142-0500 I INDEX [conn96] Validation complete for collection config.cache.chunks.test4_fsmdb0.fsmcoll0 (UUID: c7f3cab2-be92-4a48-8ca9-60ce74a83411). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.145-0500 I INDEX [conn96] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.147-0500 I INDEX [conn145] validating collection config.chunks (UUID: e7035d0b-a892-4426-b520-83da62bcbda6)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.140-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.156-0500 W STORAGE [conn204] Could not complete validation of table:index-16-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.162-0500 I INDEX [conn208] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.144-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.143-0500 I COMMAND [conn96] CMD: validate config.cache.collections, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.145-0500 I INDEX [conn96] Validation complete for collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.147-0500 I INDEX [conn145] validating index consistency _id_ on collection config.chunks
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.140-0500 W STORAGE [conn100] Could not complete validation of table:index-28--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.157-0500 I INDEX [conn204] validating collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.162-0500 I INDEX [conn208] Validation complete for collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.144-0500 W STORAGE [conn100] Could not complete validation of table:index-28--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.143-0500 W STORAGE [conn96] Could not complete validation of table:collection-27--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.146-0500 I COMMAND [conn96] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.147-0500 I INDEX [conn145] validating index consistency ns_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.140-0500 I INDEX [conn100] validating collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.157-0500 I INDEX [conn204] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.163-0500 I COMMAND [conn208] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.144-0500 I INDEX [conn100] validating collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.143-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.146-0500 W STORAGE [conn96] Could not complete validation of table:collection-25--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.147-0500 I INDEX [conn145] validating index consistency ns_1_shard_1_min_1 on collection config.chunks
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.140-0500 I INDEX [conn100] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.157-0500 I INDEX [conn204] Validation complete for collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.163-0500 W STORAGE [conn208] Could not complete validation of table:collection-15--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.144-0500 I INDEX [conn100] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.143-0500 W STORAGE [conn96] Could not complete validation of table:index-28--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.146-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.147-0500 I INDEX [conn145] validating index consistency ns_1_lastmod_1 on collection config.chunks
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.140-0500 I INDEX [conn100] Validation complete for collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.158-0500 I COMMAND [conn204] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.163-0500 I INDEX [conn208] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.145-0500 I INDEX [conn100] Validation complete for collection config.cache.databases (UUID: d4db3d14-1174-436c-a1b7-966e3cf5246f). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.143-0500 I INDEX [conn96] validating collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.146-0500 W STORAGE [conn96] Could not complete validation of table:index-26--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.147-0500 I INDEX [conn145] Validation complete for collection config.chunks (UUID: e7035d0b-a892-4426-b520-83da62bcbda6). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.141-0500 I COMMAND [conn100] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.158-0500 W STORAGE [conn204] Could not complete validation of table:collection-10-8224331490264904478. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:32:20.451-0500 FSM workload jstests/concurrency/fsm_workloads/agg_out.js started with pid 16397.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.163-0500 W STORAGE [conn208] Could not complete validation of table:index-16--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.145-0500 I COMMAND [conn100] CMD: validate config.system.sessions, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.143-0500 I INDEX [conn96] validating index consistency _id_ on collection config.cache.collections
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.146-0500 I INDEX [conn96] validating collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.148-0500 I COMMAND [conn145] CMD: validate config.collections, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.142-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.158-0500 I INDEX [conn204] validating collection local.oplog.rs (UUID: 5f1b9ff7-2fef-4590-8e90-0f3704b0f5df)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.163-0500 I INDEX [conn208] validating collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.146-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.143-0500 I INDEX [conn96] Validation complete for collection config.cache.collections (UUID: 20c18e31-cbdc-4c75-b799-d89f05ff917c). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.146-0500 I INDEX [conn96] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.148-0500 W STORAGE [conn145] Could not complete validation of table:collection-51-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.143-0500 I INDEX [conn100] validating the internal structure of index lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.188-0500 I INDEX [conn204] Validation complete for collection local.oplog.rs (UUID: 5f1b9ff7-2fef-4590-8e90-0f3704b0f5df). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.163-0500 I INDEX [conn208] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.148-0500 I INDEX [conn100] validating the internal structure of index lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.144-0500 I COMMAND [conn96] CMD: validate config.cache.databases, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.146-0500 I INDEX [conn96] Validation complete for collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.148-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection config.collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.146-0500 I INDEX [conn100] validating collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.189-0500 I COMMAND [conn204] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.163-0500 I INDEX [conn208] Validation complete for collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.151-0500 I INDEX [conn100] validating collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.144-0500 W STORAGE [conn96] Could not complete validation of table:collection-25--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.147-0500 I COMMAND [conn96] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.148-0500 W STORAGE [conn145] Could not complete validation of table:index-52-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.146-0500 I INDEX [conn100] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.190-0500 I INDEX [conn204] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.165-0500 I COMMAND [conn208] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.151-0500 I INDEX [conn100] validating index consistency _id_ on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.144-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.147-0500 W STORAGE [conn96] Could not complete validation of table:collection-21--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.148-0500 I INDEX [conn145] validating collection config.collections (UUID: c846d630-16e0-4675-b90f-3cd769544ef0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.146-0500 I INDEX [conn100] validating index consistency lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.192-0500 I INDEX [conn204] validating collection local.replset.election (UUID: 801ad0de-17c3-44b2-a878-e91b8de004c5)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.165-0500 W STORAGE [conn208] Could not complete validation of table:collection-10--2588534479858262356. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.151-0500 I INDEX [conn100] validating index consistency lsidTTLIndex on collection config.system.sessions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.144-0500 W STORAGE [conn96] Could not complete validation of table:index-26--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.147-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.148-0500 I INDEX [conn145] validating index consistency _id_ on collection config.collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.146-0500 I INDEX [conn100] Validation complete for collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.192-0500 I INDEX [conn204] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.165-0500 I INDEX [conn208] validating collection local.oplog.rs (UUID: f999d0d7-cb6c-4d2c-a5ff-807a7ed09766)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.151-0500 I INDEX [conn100] Validation complete for collection config.system.sessions (UUID: 13cbac84-c366-42f3-b1e6-6924cc7c7479). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.144-0500 I INDEX [conn96] validating collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.149-0500 I INDEX [conn96] validating collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.148-0500 I INDEX [conn145] Validation complete for collection config.collections (UUID: c846d630-16e0-4675-b90f-3cd769544ef0). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.147-0500 I COMMAND [conn100] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.192-0500 I INDEX [conn204] Validation complete for collection local.replset.election (UUID: 801ad0de-17c3-44b2-a878-e91b8de004c5). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.211-0500 I INDEX [conn208] Validation complete for collection local.oplog.rs (UUID: f999d0d7-cb6c-4d2c-a5ff-807a7ed09766). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.151-0500 I COMMAND [conn100] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.144-0500 I INDEX [conn96] validating index consistency _id_ on collection config.cache.databases
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.149-0500 I INDEX [conn96] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.149-0500 I COMMAND [conn145] CMD: validate config.databases, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.147-0500 W STORAGE [conn100] Could not complete validation of table:collection-21--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.193-0500 I COMMAND [conn204] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.212-0500 I COMMAND [conn208] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.151-0500 W STORAGE [conn100] Could not complete validation of table:collection-21--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.144-0500 I INDEX [conn96] Validation complete for collection config.cache.databases (UUID: e62da42c-0881-4ab9-ac4f-a628b927bd13). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.149-0500 I INDEX [conn96] Validation complete for collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.149-0500 W STORAGE [conn145] Could not complete validation of table:collection-55-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.147-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.193-0500 I INDEX [conn204] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.213-0500 I INDEX [conn208] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.151-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.145-0500 I COMMAND [conn96] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.152-0500 I COMMAND [conn96] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.149-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection config.databases
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.147-0500 W STORAGE [conn100] Could not complete validation of table:index-22--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.196-0500 I INDEX [conn204] validating collection local.replset.minvalid (UUID: a96fd08c-e1c8-43e5-868a-0849697b175e)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.215-0500 I INDEX [conn208] validating collection local.replset.election (UUID: 101a66fe-c3c0-4bee-94b9-e9bb8d04aa79)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.151-0500 W STORAGE [conn100] Could not complete validation of table:index-22--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.145-0500 W STORAGE [conn96] Could not complete validation of table:collection-21--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.212-0500 W STORAGE [conn96] Could not complete validation of table:collection-16--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.149-0500 W STORAGE [conn145] Could not complete validation of table:index-56-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.147-0500 I INDEX [conn100] validating collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.196-0500 I INDEX [conn204] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.215-0500 I INDEX [conn208] validating index consistency _id_ on collection local.replset.election
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:20.463-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js started with pid 16400.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.151-0500 I INDEX [conn100] validating collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.145-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.212-0500 I INDEX [conn96] validating collection local.oplog.rs (UUID: 6c707c3f-4064-4e35-98fb-b2fff8245539)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.149-0500 I INDEX [conn145] validating collection config.databases (UUID: 1c31f9a7-ee46-41d3-a296-2e1f323b51b8)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.147-0500 I INDEX [conn100] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.196-0500 I INDEX [conn204] Validation complete for collection local.replset.minvalid (UUID: a96fd08c-e1c8-43e5-868a-0849697b175e). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.215-0500 I INDEX [conn208] Validation complete for collection local.replset.election (UUID: 101a66fe-c3c0-4bee-94b9-e9bb8d04aa79). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.151-0500 I INDEX [conn100] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.147-0500 I INDEX [conn96] validating collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.300-0500 I INDEX [conn96] Validation complete for collection local.oplog.rs (UUID: 6c707c3f-4064-4e35-98fb-b2fff8245539). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.149-0500 I INDEX [conn145] validating index consistency _id_ on collection config.databases
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.147-0500 I INDEX [conn100] Validation complete for collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.196-0500 I COMMAND [conn204] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.216-0500 I COMMAND [conn208] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.152-0500 I INDEX [conn100] Validation complete for collection config.transactions (UUID: 594dd33c-8197-4d92-ab4c-87745ec5f77d). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.147-0500 I INDEX [conn96] validating index consistency _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.300-0500 I COMMAND [conn96] command local.$cmd appName: "MongoDB Shell" command: validate { validate: "oplog.rs", full: true, lsid: { id: UUID("98ffac19-46a8-4dc6-a44a-a9b1b97f84be") }, $clusterTime: { clusterTime: Timestamp(1574796738, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:678 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { W: 1 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 57653212, timeReadingMicros: 57004 } } protocol:op_msg 148ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.149-0500 I INDEX [conn145] Validation complete for collection config.databases (UUID: 1c31f9a7-ee46-41d3-a296-2e1f323b51b8). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.149-0500 I COMMAND [conn100] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.197-0500 I INDEX [conn204] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.216-0500 I INDEX [conn208] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.153-0500 I COMMAND [conn100] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.147-0500 I INDEX [conn96] Validation complete for collection config.transactions (UUID: ec61ac84-71d3-4912-9466-2724ab31be3d). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.301-0500 I COMMAND [conn96] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.150-0500 I COMMAND [conn145] CMD: validate config.lockpings, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.158-0500 W STORAGE [conn100] Could not complete validation of table:collection-16--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.199-0500 I INDEX [conn204] validating collection local.replset.oplogTruncateAfterPoint (UUID: 4ac06258-0ea7-46c8-b773-0c637830872b)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.218-0500 I INDEX [conn208] validating collection local.replset.minvalid (UUID: 5dfed1a1-c7a1-4f91-a3da-2544e54d2e9a)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.153-0500 W STORAGE [conn100] Could not complete validation of table:collection-16--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.149-0500 I COMMAND [conn96] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.302-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.150-0500 W STORAGE [conn145] Could not complete validation of table:collection-32-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.158-0500 I INDEX [conn100] validating collection local.oplog.rs (UUID: 88962763-38f7-4965-bfd6-b2a62304ae0e)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.199-0500 I INDEX [conn204] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.218-0500 I INDEX [conn208] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.153-0500 I INDEX [conn100] validating collection local.oplog.rs (UUID: 6d43bede-f05f-41b1-b7ac-5a32b66b8140)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.209-0500 W STORAGE [conn96] Could not complete validation of table:collection-16--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.304-0500 I INDEX [conn96] validating collection local.replset.election (UUID: 6a83721b-d0f2-438c-a2e3-ec6a11e75236)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.150-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection config.lockpings
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.229-0500 I INDEX [conn100] Validation complete for collection local.oplog.rs (UUID: 88962763-38f7-4965-bfd6-b2a62304ae0e). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.199-0500 I INDEX [conn204] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 4ac06258-0ea7-46c8-b773-0c637830872b). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.219-0500 I INDEX [conn208] Validation complete for collection local.replset.minvalid (UUID: 5dfed1a1-c7a1-4f91-a3da-2544e54d2e9a). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.187-0500 I INDEX [conn100] Validation complete for collection local.oplog.rs (UUID: 6d43bede-f05f-41b1-b7ac-5a32b66b8140). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.209-0500 I INDEX [conn96] validating collection local.oplog.rs (UUID: 307925b3-4143-4c06-a46a-f04119b3afb4)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.304-0500 I INDEX [conn96] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.152-0500 I INDEX [conn145] validating the internal structure of index ping_1 on collection config.lockpings
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.230-0500 I COMMAND [conn100] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.200-0500 I COMMAND [conn204] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.219-0500 I COMMAND [conn208] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.188-0500 I COMMAND [conn100] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.298-0500 I INDEX [conn96] Validation complete for collection local.oplog.rs (UUID: 307925b3-4143-4c06-a46a-f04119b3afb4). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.304-0500 I INDEX [conn96] Validation complete for collection local.replset.election (UUID: 6a83721b-d0f2-438c-a2e3-ec6a11e75236). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.152-0500 W STORAGE [conn145] Could not complete validation of table:index-34-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.231-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.201-0500 I INDEX [conn204] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.220-0500 I INDEX [conn208] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.188-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.298-0500 I COMMAND [conn96] command local.$cmd appName: "MongoDB Shell" command: validate { validate: "oplog.rs", full: true, lsid: { id: UUID("1ed47288-a80c-4312-ac63-bbc98c658ea4") }, $clusterTime: { clusterTime: Timestamp(1574796738, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:678 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { W: 1 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 57703274, timeReadingMicros: 57485 } } protocol:op_msg 149ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.305-0500 I COMMAND [conn96] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.152-0500 I INDEX [conn145] validating collection config.lockpings (UUID: f662f115-623a-496b-9953-7132cdf8c056)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.233-0500 I INDEX [conn100] validating collection local.replset.election (UUID: d0928956-d7fc-46fe-a9bc-1f07f2435457)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.203-0500 I INDEX [conn204] validating collection local.startup_log (UUID: e8e71921-e80f-42ad-92d0-ad769374a694)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.222-0500 I INDEX [conn208] validating collection local.replset.oplogTruncateAfterPoint (UUID: 31ce824c-ef86-4223-a4be-3069dae7b5f2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.190-0500 I INDEX [conn100] validating collection local.replset.election (UUID: bf7b5380-e70a-475e-ad1b-16751bee6907)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.299-0500 I COMMAND [conn96] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.305-0500 W STORAGE [conn96] Could not complete validation of table:collection-4--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.152-0500 I INDEX [conn145] validating index consistency _id_ on collection config.lockpings
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.233-0500 I INDEX [conn100] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.203-0500 I INDEX [conn204] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.222-0500 I INDEX [conn208] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.190-0500 I INDEX [conn100] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.300-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.305-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.152-0500 I INDEX [conn145] validating index consistency ping_1 on collection config.lockpings
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.233-0500 I INDEX [conn100] Validation complete for collection local.replset.election (UUID: d0928956-d7fc-46fe-a9bc-1f07f2435457). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.203-0500 I INDEX [conn204] Validation complete for collection local.startup_log (UUID: e8e71921-e80f-42ad-92d0-ad769374a694). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.222-0500 I INDEX [conn208] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 31ce824c-ef86-4223-a4be-3069dae7b5f2). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.191-0500 I INDEX [conn100] Validation complete for collection local.replset.election (UUID: bf7b5380-e70a-475e-ad1b-16751bee6907). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.302-0500 I INDEX [conn96] validating collection local.replset.election (UUID: 7b059263-7419-4cf5-8072-b44957d729c9)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.306-0500 I INDEX [conn96] validating collection local.replset.minvalid (UUID: 3f481e27-9697-4b6d-b77b-0bd9b43c5dfa)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.152-0500 I INDEX [conn145] Validation complete for collection config.lockpings (UUID: f662f115-623a-496b-9953-7132cdf8c056). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.234-0500 I COMMAND [conn100] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.204-0500 I COMMAND [conn204] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.223-0500 I COMMAND [conn208] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.191-0500 I COMMAND [conn100] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.302-0500 I INDEX [conn96] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.306-0500 I INDEX [conn96] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.154-0500 I COMMAND [conn145] CMD: validate config.locks, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.234-0500 W STORAGE [conn100] Could not complete validation of table:collection-4--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.204-0500 I INDEX [conn204] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.223-0500 I INDEX [conn208] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.191-0500 W STORAGE [conn100] Could not complete validation of table:collection-4--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.191-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.193-0500 I INDEX [conn100] validating collection local.replset.minvalid (UUID: 6654b1c2-f323-4c78-9165-5ff31d331960)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.193-0500 I INDEX [conn100] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.193-0500 I INDEX [conn100] Validation complete for collection local.replset.minvalid (UUID: 6654b1c2-f323-4c78-9165-5ff31d331960). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.207-0500 I INDEX [conn204] validating collection local.system.replset (UUID: 318b7af2-23ac-427e-bba7-a3e3f5b1e60d)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.225-0500 I INDEX [conn208] validating collection local.startup_log (UUID: fd9e05bb-cd6c-441c-9265-3783d4065b03)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.302-0500 I INDEX [conn96] Validation complete for collection local.replset.election (UUID: 7b059263-7419-4cf5-8072-b44957d729c9). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.306-0500 I INDEX [conn96] Validation complete for collection local.replset.minvalid (UUID: 3f481e27-9697-4b6d-b77b-0bd9b43c5dfa). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.154-0500 W STORAGE [conn145] Could not complete validation of table:collection-28-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.234-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.194-0500 I COMMAND [conn100] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.207-0500 I INDEX [conn204] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.226-0500 I INDEX [conn208] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.302-0500 I COMMAND [conn96] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.307-0500 I COMMAND [conn96] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.154-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection config.locks
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.235-0500 I INDEX [conn100] validating collection local.replset.minvalid (UUID: 6eb6e647-60c7-450a-a905-f04052287b8a)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.197-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.207-0500 I INDEX [conn204] Validation complete for collection local.system.replset (UUID: 318b7af2-23ac-427e-bba7-a3e3f5b1e60d). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.226-0500 I INDEX [conn208] Validation complete for collection local.startup_log (UUID: fd9e05bb-cd6c-441c-9265-3783d4065b03). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.303-0500 W STORAGE [conn96] Could not complete validation of table:collection-4--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.311-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.154-0500 W STORAGE [conn145] Could not complete validation of table:index-29-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.235-0500 I INDEX [conn100] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.199-0500 I INDEX [conn100] validating collection local.replset.oplogTruncateAfterPoint (UUID: fe211210-ae1b-4ab2-81d6-86b025cc1404)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.207-0500 I COMMAND [conn204] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.226-0500 I COMMAND [conn208] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.303-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.313-0500 I INDEX [conn96] validating collection local.replset.oplogTruncateAfterPoint (UUID: ae67a1b2-b2be-4d7e-8242-18f3082bc280)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.154-0500 I INDEX [conn145] validating the internal structure of index ts_1 on collection config.locks
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.235-0500 I INDEX [conn100] Validation complete for collection local.replset.minvalid (UUID: 6eb6e647-60c7-450a-a905-f04052287b8a). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.199-0500 I INDEX [conn100] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.208-0500 I INDEX [conn204] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.227-0500 I INDEX [conn208] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.304-0500 I INDEX [conn96] validating collection local.replset.minvalid (UUID: e1166351-a2a9-4335-b202-a653b252b811)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.313-0500 I INDEX [conn96] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.154-0500 W STORAGE [conn145] Could not complete validation of table:index-30-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.236-0500 I COMMAND [conn100] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.199-0500 I INDEX [conn100] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: fe211210-ae1b-4ab2-81d6-86b025cc1404). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.210-0500 I INDEX [conn204] validating collection local.system.rollback.id (UUID: 2d9a033a-73d1-44ef-b7d1-30b6243b0419)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.229-0500 I INDEX [conn208] validating collection local.system.replset (UUID: 3eb8c3e8-f477-448c-9a25-5db5ef40b0d6)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.304-0500 I INDEX [conn96] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.313-0500 I INDEX [conn96] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: ae67a1b2-b2be-4d7e-8242-18f3082bc280). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.154-0500 I INDEX [conn145] validating the internal structure of index state_1_process_1 on collection config.locks
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.239-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.200-0500 I COMMAND [conn100] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.210-0500 I INDEX [conn204] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.229-0500 I INDEX [conn208] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.304-0500 I INDEX [conn96] Validation complete for collection local.replset.minvalid (UUID: e1166351-a2a9-4335-b202-a653b252b811). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.314-0500 I COMMAND [conn96] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.315-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.241-0500 I INDEX [conn100] validating collection local.replset.oplogTruncateAfterPoint (UUID: 5d41bfc8-ebca-43f3-a038-30023495a91a)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.241-0500 I INDEX [conn100] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.210-0500 I INDEX [conn204] Validation complete for collection local.system.rollback.id (UUID: 2d9a033a-73d1-44ef-b7d1-30b6243b0419). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.229-0500 I INDEX [conn208] Validation complete for collection local.system.replset (UUID: 3eb8c3e8-f477-448c-9a25-5db5ef40b0d6). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.305-0500 I COMMAND [conn96] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.154-0500 W STORAGE [conn145] Could not complete validation of table:index-31-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.317-0500 I INDEX [conn96] validating collection local.startup_log (UUID: fb2ea5d2-ac7b-4697-a368-9f5d41483423)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.201-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.241-0500 I INDEX [conn100] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 5d41bfc8-ebca-43f3-a038-30023495a91a). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.242-0500 I COMMAND [conn100] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.230-0500 I COMMAND [conn208] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.309-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.154-0500 I INDEX [conn145] validating collection config.locks (UUID: dbde06c7-d8ac-4f80-ab9f-cae486f16451)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.317-0500 I INDEX [conn96] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.203-0500 I INDEX [conn100] validating collection local.startup_log (UUID: 7b6988ea-0c65-41a6-9855-5680c2c711a1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.212-0500 I COMMAND [conn204] CMD: validate test4_fsmdb0.fsmcoll0, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.243-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.231-0500 I INDEX [conn208] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.311-0500 I INDEX [conn96] validating collection local.replset.oplogTruncateAfterPoint (UUID: 022b88bb-9282-4f39-aad1-6988341f4ac1)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.154-0500 I INDEX [conn145] validating index consistency _id_ on collection config.locks
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.317-0500 I INDEX [conn96] Validation complete for collection local.startup_log (UUID: fb2ea5d2-ac7b-4697-a368-9f5d41483423). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.317-0500 I COMMAND [conn96] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.214-0500 I INDEX [conn204] validating the internal structure of index _id_ on collection test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.245-0500 I INDEX [conn100] validating collection local.startup_log (UUID: e0cc0511-0005-4584-a461-5ae30058b4c6)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.233-0500 I INDEX [conn208] validating collection local.system.rollback.id (UUID: 223114bc-2956-4d9b-8f0a-5c567c2cb10e)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.311-0500 I INDEX [conn96] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.154-0500 I INDEX [conn145] validating index consistency ts_1 on collection config.locks
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.203-0500 I INDEX [conn100] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.318-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.215-0500 I INDEX [conn204] validating the internal structure of index _id_hashed on collection test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.245-0500 I INDEX [conn100] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.233-0500 I INDEX [conn208] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.311-0500 I INDEX [conn96] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 022b88bb-9282-4f39-aad1-6988341f4ac1). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.154-0500 I INDEX [conn145] validating index consistency state_1_process_1 on collection config.locks
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.203-0500 I INDEX [conn100] Validation complete for collection local.startup_log (UUID: 7b6988ea-0c65-41a6-9855-5680c2c711a1). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.320-0500 I INDEX [conn96] validating collection local.system.replset (UUID: 2b695a66-e9c6-4bba-a36e-eb0a5cf356ba)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.218-0500 I INDEX [conn204] validating collection test4_fsmdb0.fsmcoll0 (UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.245-0500 I INDEX [conn100] Validation complete for collection local.startup_log (UUID: e0cc0511-0005-4584-a461-5ae30058b4c6). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.233-0500 I INDEX [conn208] Validation complete for collection local.system.rollback.id (UUID: 223114bc-2956-4d9b-8f0a-5c567c2cb10e). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.312-0500 I COMMAND [conn96] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.154-0500 I INDEX [conn145] Validation complete for collection config.locks (UUID: dbde06c7-d8ac-4f80-ab9f-cae486f16451). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.203-0500 I COMMAND [conn100] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.320-0500 I INDEX [conn96] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.219-0500 I INDEX [conn204] validating index consistency _id_ on collection test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.245-0500 I COMMAND [conn100] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.234-0500 I COMMAND [conn208] CMD: validate test4_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.312-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.155-0500 I COMMAND [conn145] CMD: validate config.migrations, full:true
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.204-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.320-0500 I INDEX [conn96] Validation complete for collection local.system.replset (UUID: 2b695a66-e9c6-4bba-a36e-eb0a5cf356ba). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.219-0500 I INDEX [conn204] validating index consistency _id_hashed on collection test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.246-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.236-0500 I INDEX [conn208] validating the internal structure of index _id_ on collection test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.314-0500 I INDEX [conn96] validating collection local.startup_log (UUID: 62f9eac5-a715-4818-9af1-edc47894f622)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.156-0500 W STORAGE [conn145] Could not complete validation of table:collection-22-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.206-0500 I INDEX [conn100] validating collection local.system.replset (UUID: 920cbf66-0930-4ef5-82e9-10d7319f0fda)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.321-0500 I COMMAND [conn96] CMD: validate local.system.rollback.id, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:32:20.479-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.220-0500 I INDEX [conn204] Validation complete for collection test4_fsmdb0.fsmcoll0 (UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd). No corruption found.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.248-0500 I INDEX [conn100] validating collection local.system.replset (UUID: 3b8c02e8-ec29-4e79-912d-3e315d1d851c)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.237-0500 I INDEX [conn208] validating the internal structure of index _id_hashed on collection test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.315-0500 I INDEX [conn96] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.156-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection config.migrations
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.206-0500 I INDEX [conn100] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.322-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.221-0500 I NETWORK [conn204] end connection 127.0.0.1:40076 (41 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.248-0500 I INDEX [conn100] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.240-0500 I INDEX [conn208] validating collection test4_fsmdb0.agg_out (UUID: 6d7b1b53-805f-4e82-a6e8-dfd96f7e7393)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.315-0500 I INDEX [conn96] Validation complete for collection local.startup_log (UUID: 62f9eac5-a715-4818-9af1-edc47894f622). No corruption found.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.156-0500 W STORAGE [conn145] Could not complete validation of table:index-23-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.206-0500 I INDEX [conn100] Validation complete for collection local.system.replset (UUID: 920cbf66-0930-4ef5-82e9-10d7319f0fda). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.324-0500 I INDEX [conn96] validating collection local.system.rollback.id (UUID: d6027364-802b-4e8d-ae7f-556bc4252840)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.336-0500 I NETWORK [conn203] end connection 127.0.0.1:40050 (40 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.248-0500 I INDEX [conn100] Validation complete for collection local.system.replset (UUID: 3b8c02e8-ec29-4e79-912d-3e315d1d851c). No corruption found.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.241-0500 I INDEX [conn208] validating index consistency _id_ on collection test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.315-0500 I COMMAND [conn96] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.156-0500 I INDEX [conn145] validating the internal structure of index ns_1_min_1 on collection config.migrations
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.207-0500 I COMMAND [conn100] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.324-0500 I INDEX [conn96] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.346-0500 I NETWORK [conn202] end connection 127.0.0.1:40044 (39 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.249-0500 I COMMAND [conn100] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.242-0500 I INDEX [conn208] validating index consistency _id_hashed on collection test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.316-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.156-0500 W STORAGE [conn145] Could not complete validation of table:index-24-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.208-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.324-0500 I INDEX [conn96] Validation complete for collection local.system.rollback.id (UUID: d6027364-802b-4e8d-ae7f-556bc4252840). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.972-0500 I COMMAND [conn37] CMD: drop test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.250-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.242-0500 I INDEX [conn208] Validation complete for collection test4_fsmdb0.agg_out (UUID: 6d7b1b53-805f-4e82-a6e8-dfd96f7e7393). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.318-0500 I INDEX [conn96] validating collection local.system.replset (UUID: c43cc3e4-845d-4144-8406-83bf4df96d39)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.156-0500 I INDEX [conn145] validating collection config.migrations (UUID: 550e32ef-0dd4-48f9-bb5e-9e21bec0734f)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.210-0500 I INDEX [conn100] validating collection local.system.rollback.id (UUID: 9434a858-83b3-4d87-8d66-64bde405790b)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.325-0500 I COMMAND [conn96] CMD: validate test4_fsmdb0.agg_out, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.985-0500 I COMMAND [conn37] CMD: drop test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.252-0500 I INDEX [conn100] validating collection local.system.rollback.id (UUID: 1099f6d7-f170-471c-a0ac-dc97bd7e42b0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:20.485-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.242-0500 I COMMAND [conn208] CMD: validate test4_fsmdb0.fsmcoll0, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:32:20.530-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.822-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.822-0500 Implicit session: session { "id" : UUID("81fc4fb1-fe7f-4fbe-9830-4ea02604e934") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.822-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.823-0500 true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.823-0500 2019-11-26T14:32:20.543-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.823-0500 2019-11-26T14:32:20.543-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.823-0500 2019-11-26T14:32:20.544-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.823-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.318-0500 I INDEX [conn96] validating index consistency _id_ on collection local.system.replset
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.823-0500 Implicit session: session { "id" : UUID("6eb1ba76-9c69-4752-ba2c-4295709b636e") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.156-0500 I INDEX [conn145] validating index consistency _id_ on collection config.migrations
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.823-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.210-0500 I INDEX [conn100] validating index consistency _id_ on collection local.system.rollback.id
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.823-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.325-0500 W STORAGE [conn96] Could not complete validation of table:collection-437--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.823-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.985-0500 I STORAGE [conn37] dropCollection: test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.824-0500 true
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.530-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45960 #191 (1 connection now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.824-0500 [jsTest] New session started with sessionID: { "id" : UUID("ed837728-89be-4c36-8b0d-24ff8ec3e724") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:20.571-0500 I NETWORK [listener] connection accepted from 127.0.0.1:59156 #57 (1 connection now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.824-0500 2019-11-26T14:32:20.546-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.252-0500 I INDEX [conn100] validating index consistency _id_ on collection local.system.rollback.id
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.824-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.244-0500 I INDEX [conn208] validating the internal structure of index _id_ on collection test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.824-0500 2019-11-26T14:32:20.546-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.318-0500 I INDEX [conn96] Validation complete for collection local.system.replset (UUID: c43cc3e4-845d-4144-8406-83bf4df96d39). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.825-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.156-0500 I INDEX [conn145] validating index consistency ns_1_min_1 on collection config.migrations
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.825-0500 2019-11-26T14:32:20.547-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.210-0500 I INDEX [conn100] Validation complete for collection local.system.rollback.id (UUID: 9434a858-83b3-4d87-8d66-64bde405790b). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.825-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.325-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection test4_fsmdb0.agg_out
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.825-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.985-0500 I STORAGE [conn37] Finishing collection drop for test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd).
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.825-0500 2019-11-26T14:32:20.547-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.252-0500 I INDEX [conn100] Validation complete for collection local.system.rollback.id (UUID: 1099f6d7-f170-471c-a0ac-dc97bd7e42b0). No corruption found.
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.826-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.531-0500 I NETWORK [conn191] received client metadata from 127.0.0.1:45960 conn191: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.826-0500 2019-11-26T14:32:20.547-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:20.571-0500 I NETWORK [conn57] received client metadata from 127.0.0.1:59156 conn57: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.826-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.246-0500 I INDEX [conn208] validating the internal structure of index _id_hashed on collection test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.826-0500 2019-11-26T14:32:20.548-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.319-0500 I COMMAND [conn96] CMD: validate local.system.rollback.id, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.826-0500 [jsTest] New session started with sessionID: { "id" : UUID("0fcf076f-d701-4c42-98ed-a9623da9f0b3") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.156-0500 I INDEX [conn145] Validation complete for collection config.migrations (UUID: 550e32ef-0dd4-48f9-bb5e-9e21bec0734f). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.827-0500 2019-11-26T14:32:20.548-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.211-0500 I COMMAND [conn100] CMD: validate test4_fsmdb0.fsmcoll0, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.827-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.325-0500 W STORAGE [conn96] Could not complete validation of table:index-438--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.827-0500 2019-11-26T14:32:20.548-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.986-0500 I STORAGE [conn37] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd)'. Ident: 'index-343-8224331490264904478', commit timestamp: 'Timestamp(1574796739, 13)'
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.827-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.253-0500 I COMMAND [conn100] CMD: validate test4_fsmdb0.fsmcoll0, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.827-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.535-0500 I NETWORK [listener] connection accepted from 127.0.0.1:45962 #192 (2 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.828-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:21.040-0500 I NETWORK [listener] connection accepted from 127.0.0.1:59212 #58 (2 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.828-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.248-0500 I INDEX [conn208] validating collection test4_fsmdb0.fsmcoll0 (UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.828-0500 2019-11-26T14:32:20.550-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.319-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection local.system.rollback.id
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.828-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.157-0500 I COMMAND [conn145] CMD: validate config.mongos, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.828-0500 2019-11-26T14:32:20.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.211-0500 W STORAGE [conn100] Could not complete validation of table:collection-353--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.828-0500 [jsTest] New session started with sessionID: { "id" : UUID("4c5f0545-68f7-4d16-9d80-4368d0b3acd7") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.326-0500 I INDEX [conn96] validating the internal structure of index _id_hashed on collection test4_fsmdb0.agg_out
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.829-0500 2019-11-26T14:32:20.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.986-0500 I STORAGE [conn37] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd)'. Ident: 'index-344-8224331490264904478', commit timestamp: 'Timestamp(1574796739, 13)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.829-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.253-0500 W STORAGE [conn100] Could not complete validation of table:collection-353--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.829-0500 2019-11-26T14:32:20.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.535-0500 I NETWORK [conn192] received client metadata from 127.0.0.1:45962 conn192: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.829-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:21.040-0500 I NETWORK [conn58] received client metadata from 127.0.0.1:59212 conn58: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.829-0500 2019-11-26T14:32:20.551-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.250-0500 I INDEX [conn208] validating index consistency _id_ on collection test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.829-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.321-0500 I INDEX [conn96] validating collection local.system.rollback.id (UUID: af3b2fdb-b5ae-49b3-a026-c55e1bf822c0)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.830-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.157-0500 W STORAGE [conn145] Could not complete validation of table:collection-43-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.830-0500 2019-11-26T14:32:20.550-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.211-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.830-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.326-0500 W STORAGE [conn96] Could not complete validation of table:index-447--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.830-0500 2019-11-26T14:32:20.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.986-0500 I STORAGE [conn37] Deferring table drop for collection 'test4_fsmdb0.fsmcoll0'. Ident: collection-342-8224331490264904478, commit timestamp: Timestamp(1574796739, 13)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.830-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.253-0500 I INDEX [conn100] validating the internal structure of index _id_ on collection test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.831-0500 2019-11-26T14:32:20.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.563-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46004 #193 (3 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.831-0500 [jsTest] New session started with sessionID: { "id" : UUID("0f463b9d-ebfc-42eb-ae2e-efc5a84bf79e") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:21.060-0500 I NETWORK [listener] connection accepted from 127.0.0.1:59220 #59 (3 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.831-0500 2019-11-26T14:32:20.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.250-0500 I INDEX [conn208] validating index consistency _id_hashed on collection test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.831-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.321-0500 I INDEX [conn96] validating index consistency _id_ on collection local.system.rollback.id
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.831-0500 2019-11-26T14:32:20.551-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.157-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection config.mongos
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.831-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.212-0500 W STORAGE [conn100] Could not complete validation of table:index-354--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.832-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.326-0500 I INDEX [conn96] validating collection test4_fsmdb0.agg_out (UUID: 6d7b1b53-805f-4e82-a6e8-dfd96f7e7393)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.832-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.998-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.fsmcoll0 took 1 ms and found the collection is not sharded
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.832-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.253-0500 W STORAGE [conn100] Could not complete validation of table:index-354--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.832-0500 2019-11-26T14:32:20.553-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.563-0500 I NETWORK [conn193] received client metadata from 127.0.0.1:46004 conn193: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.832-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:21.061-0500 I NETWORK [conn59] received client metadata from 127.0.0.1:59220 conn59: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.832-0500 2019-11-26T14:32:20.553-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.250-0500 I INDEX [conn208] Validation complete for collection test4_fsmdb0.fsmcoll0 (UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.833-0500 [jsTest] New session started with sessionID: { "id" : UUID("a1760c2b-908c-4649-8f1a-dcc26389da1b") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.321-0500 I INDEX [conn96] Validation complete for collection local.system.rollback.id (UUID: af3b2fdb-b5ae-49b3-a026-c55e1bf822c0). No corruption found.
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.833-0500 2019-11-26T14:32:20.553-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.159-0500 I INDEX [conn145] validating collection config.mongos (UUID: 57207abe-6d8d-4102-a526-bc847dba6c09)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.833-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.212-0500 I INDEX [conn100] validating the internal structure of index _id_hashed on collection test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.833-0500 2019-11-26T14:32:20.553-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.327-0500 I INDEX [conn96] validating index consistency _id_ on collection test4_fsmdb0.agg_out
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.833-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.998-0500 I SHARDING [conn37] Updating metadata for collection test4_fsmdb0.fsmcoll0 from collection version: 1|3||5ddd7daccf8184c2e1494359, shard version: 1|1||5ddd7daccf8184c2e1494359 to collection version: due to UUID change
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.833-0500 2019-11-26T14:32:20.554-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.254-0500 I INDEX [conn100] validating the internal structure of index _id_hashed on collection test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.834-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.570-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46012 #194 (4 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.834-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:21.071-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test5_fsmdb0 from version {} to version { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 } took 0 ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.834-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.252-0500 I NETWORK [conn208] end connection 127.0.0.1:47552 (42 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.834-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.323-0500 I COMMAND [conn96] CMD: validate test4_fsmdb0.agg_out, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.834-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.159-0500 I INDEX [conn145] validating index consistency _id_ on collection config.mongos
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.834-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.212-0500 W STORAGE [conn100] Could not complete validation of table:index-355--8000595249233899911. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.835-0500 Implicit session: session { "id" : UUID("cc99317b-fa80-446e-966c-bd4589d3311e") }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.328-0500 I INDEX [conn96] validating index consistency _id_hashed on collection test4_fsmdb0.agg_out
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.835-0500 [jsTest] New session started with sessionID: { "id" : UUID("e0680a1e-4649-42c8-ba8e-237a103832a8") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.998-0500 I COMMAND [ShardServerCatalogCacheLoader-0] CMD: drop config.cache.chunks.test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.835-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.254-0500 W STORAGE [conn100] Could not complete validation of table:index-355--4104909142373009110. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.835-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.570-0500 I NETWORK [conn194] received client metadata from 127.0.0.1:46012 conn194: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.835-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:21.073-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test5_fsmdb0.fsmcoll0 to version 1|3||5ddd7dc43bbfe7fa5630eb06 took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.835-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.336-0500 I NETWORK [conn207] end connection 127.0.0.1:47524 (41 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.835-0500 Implicit session: session { "id" : UUID("1062decd-7be8-4330-a913-09d65ebc0c3e") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.323-0500 W STORAGE [conn96] Could not complete validation of table:collection-437--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.836-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.159-0500 I INDEX [conn145] Validation complete for collection config.mongos (UUID: 57207abe-6d8d-4102-a526-bc847dba6c09). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.836-0500 MongoDB server version: 0.0.0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.213-0500 I INDEX [conn100] validating collection test4_fsmdb0.fsmcoll0 (UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.836-0500 setting random seed: 750235419
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.328-0500 I INDEX [conn96] Validation complete for collection test4_fsmdb0.agg_out (UUID: 6d7b1b53-805f-4e82-a6e8-dfd96f7e7393). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.836-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.998-0500 I STORAGE [ShardServerCatalogCacheLoader-0] dropCollection: config.cache.chunks.test4_fsmdb0.fsmcoll0 (647e6274-b0dc-4671-90c7-65b5ed709ba8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.836-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.255-0500 I INDEX [conn100] validating collection test4_fsmdb0.fsmcoll0 (UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.836-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.606-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46040 #195 (5 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.837-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:21.266-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 195ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.837-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.346-0500 I NETWORK [conn206] end connection 127.0.0.1:47518 (40 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.837-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.323-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection test4_fsmdb0.agg_out
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.837-0500 [jsTest] New session started with sessionID: { "id" : UUID("f64a797d-bf82-45cb-8cae-12a72dbfc97e") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.160-0500 I COMMAND [conn145] CMD: validate config.settings, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.837-0500 [jsTest] New session started with sessionID: { "id" : UUID("89e7d5bb-0ad0-4ffd-b43e-f2d90ed83e00") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.214-0500 I INDEX [conn100] validating index consistency _id_ on collection test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.837-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.329-0500 I COMMAND [conn96] CMD: validate test4_fsmdb0.fsmcoll0, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.837-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.998-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Finishing collection drop for config.cache.chunks.test4_fsmdb0.fsmcoll0 (647e6274-b0dc-4671-90c7-65b5ed709ba8).
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.838-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.256-0500 I INDEX [conn100] validating index consistency _id_ on collection test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.838-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.606-0500 I NETWORK [conn195] received client metadata from 127.0.0.1:46040 conn195: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.838-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:21.266-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 195ms
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.838-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.609-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796733, 8)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.838-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.323-0500 W STORAGE [conn96] Could not complete validation of table:index-438--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.838-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.160-0500 W STORAGE [conn145] Could not complete validation of table:collection-45-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.839-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.214-0500 I INDEX [conn100] validating index consistency _id_hashed on collection test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.839-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.329-0500 W STORAGE [conn96] Could not complete validation of table:collection-225--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.839-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.998-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0 (647e6274-b0dc-4671-90c7-65b5ed709ba8)'. Ident: 'index-346-8224331490264904478', commit timestamp: 'Timestamp(1574796739, 21)'
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.839-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.256-0500 I INDEX [conn100] validating index consistency _id_hashed on collection test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.839-0500 [jsTest] New session started with sessionID: { "id" : UUID("e2692ddc-e250-42d4-94bf-482be555121f") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.608-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46042 #196 (6 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.839-0500 [jsTest] New session started with sessionID: { "id" : UUID("12ada6b6-dc56-430d-a8ec-97905d787b88") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.609-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-189--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 7)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.839-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.323-0500 I INDEX [conn96] validating the internal structure of index _id_hashed on collection test4_fsmdb0.agg_out
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.840-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.160-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection config.settings
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.840-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.215-0500 I INDEX [conn100] Validation complete for collection test4_fsmdb0.fsmcoll0 (UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd). No corruption found.
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.840-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.329-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.840-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.998-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0 (647e6274-b0dc-4671-90c7-65b5ed709ba8)'. Ident: 'index-347-8224331490264904478', commit timestamp: 'Timestamp(1574796739, 21)'
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.840-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.256-0500 I INDEX [conn100] Validation complete for collection test4_fsmdb0.fsmcoll0 (UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.840-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.609-0500 I NETWORK [conn196] received client metadata from 127.0.0.1:46042 conn196: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.841-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.612-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-190--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 7)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.841-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.324-0500 W STORAGE [conn96] Could not complete validation of table:index-447--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.841-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.160-0500 W STORAGE [conn145] Could not complete validation of table:index-46-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.841-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.216-0500 I NETWORK [conn100] end connection 127.0.0.1:54010 (14 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.841-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.329-0500 W STORAGE [conn96] Could not complete validation of table:index-226--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.841-0500 [jsTest] New session started with sessionID: { "id" : UUID("a13bcf13-58f1-4abf-aa44-9682d15aa18b") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:19.998-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Deferring table drop for collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0'. Ident: collection-345-8224331490264904478, commit timestamp: Timestamp(1574796739, 21)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.841-0500 [jsTest] New session started with sessionID: { "id" : UUID("130e432a-33ff-4514-93b8-547e3a30c738") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.258-0500 I NETWORK [conn100] end connection 127.0.0.1:53114 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.842-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.635-0500 I NETWORK [conn196] end connection 127.0.0.1:46042 (5 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.842-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.613-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-188--2588534479858262356 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 7)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.842-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.324-0500 I INDEX [conn96] validating collection test4_fsmdb0.agg_out (UUID: 6d7b1b53-805f-4e82-a6e8-dfd96f7e7393)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.842-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.160-0500 I INDEX [conn145] validating collection config.settings (UUID: 6d167d1d-0483-49b9-9ac8-ee5b66996698)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.842-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.346-0500 I NETWORK [conn99] end connection 127.0.0.1:53968 (13 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.842-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.329-0500 I INDEX [conn96] validating the internal structure of index _id_hashed on collection test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.843-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.009-0500 I COMMAND [conn37] dropDatabase test4_fsmdb0 - starting
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.843-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.346-0500 I NETWORK [conn99] end connection 127.0.0.1:53082 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.843-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.681-0500 I NETWORK [conn195] end connection 127.0.0.1:46040 (4 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.843-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.614-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-207--2588534479858262356 (ns: config.cache.chunks.test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 11)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.843-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.325-0500 I INDEX [conn96] validating index consistency _id_ on collection test4_fsmdb0.agg_out
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.843-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.160-0500 I INDEX [conn145] validating index consistency _id_ on collection config.settings
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.843-0500 [jsTest] New session started with sessionID: { "id" : UUID("63ede745-eb66-4fce-97fa-fca3f135b60f") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.987-0500 I COMMAND [ReplWriterWorker-3] CMD: drop test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.844-0500 [jsTest] New session started with sessionID: { "id" : UUID("29f85d66-640b-4dd8-9bfa-8c8f004df115") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.329-0500 W STORAGE [conn96] Could not complete validation of table:index-227--2310912778499990807. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.844-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.009-0500 I COMMAND [conn37] dropDatabase test4_fsmdb0 - dropped 0 collection(s)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.844-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.989-0500 I COMMAND [ReplWriterWorker-10] CMD: drop test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.844-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.685-0500 I NETWORK [conn192] end connection 127.0.0.1:45962 (3 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.844-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.615-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-210--2588534479858262356 (ns: config.cache.chunks.test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 11)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.844-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.326-0500 I INDEX [conn96] validating index consistency _id_hashed on collection test4_fsmdb0.agg_out
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.845-0500
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.160-0500 I INDEX [conn145] Validation complete for collection config.settings (UUID: 6d167d1d-0483-49b9-9ac8-ee5b66996698). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.845-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.988-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796739, 13), t: 1 } and commit timestamp Timestamp(1574796739, 13)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.845-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.330-0500 I INDEX [conn96] validating collection test4_fsmdb0.fsmcoll0 (UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.845-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.009-0500 I COMMAND [conn37] dropDatabase test4_fsmdb0 - finished
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.845-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.989-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796739, 13), t: 1 } and commit timestamp Timestamp(1574796739, 13)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.845-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.765-0500 I COMMAND [conn194] command test5_fsmdb0.fsmcoll0 appName: "MongoDB Shell" command: shardCollection { shardCollection: "test5_fsmdb0.fsmcoll0", key: { _id: "hashed" }, lsid: { id: UUID("dd718102-e73d-4e8c-9c24-9aea49593289") }, $clusterTime: { clusterTime: Timestamp(1574796740, 17), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:245 protocol:op_msg 154ms
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.845-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.615-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-205--2588534479858262356 (ns: config.cache.chunks.test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 11)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.846-0500 [jsTest] New session started with sessionID: { "id" : UUID("412220ca-f99b-4082-bf20-1c629657d353") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.326-0500 I INDEX [conn96] Validation complete for collection test4_fsmdb0.agg_out (UUID: 6d7b1b53-805f-4e82-a6e8-dfd96f7e7393). No corruption found.
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.846-0500 [jsTest] New session started with sessionID: { "id" : UUID("66e7b54a-69f0-43cd-b333-ba2a88a53a16") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.161-0500 I COMMAND [conn145] CMD: validate config.shards, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.846-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.988-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd).
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.846-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.331-0500 I INDEX [conn96] validating index consistency _id_ on collection test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.846-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.014-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 took 0 ms and failed :: caused by :: NamespaceNotFound: database test4_fsmdb0 not found
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.846-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.989-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd).
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.847-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.862-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test5_fsmdb0 from version {} to version { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 } took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.847-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.616-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-157--2588534479858262356 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 16)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.847-0500 Running data consistency checks for replica set: shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.326-0500 I COMMAND [conn96] CMD: validate test4_fsmdb0.fsmcoll0, full:true
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.847-0500 Recreating replica set from config {
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.161-0500 W STORAGE [conn145] Could not complete validation of table:collection-25-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.847-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.988-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd)'. Ident: 'index-354--8000595249233899911', commit timestamp: 'Timestamp(1574796739, 13)'
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.847-0500 "_id" : "config-rs",
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.331-0500 I INDEX [conn96] validating index consistency _id_hashed on collection test4_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.847-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.015-0500 I SHARDING [conn37] setting this node's cached database version for test4_fsmdb0 to {}
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.848-0500 "version" : 1,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.989-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd)'. Ident: 'index-354--4104909142373009110', commit timestamp: 'Timestamp(1574796739, 13)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.848-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.864-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test5_fsmdb0.fsmcoll0 to version 1|3||5ddd7dc43bbfe7fa5630eb06 took 0 ms
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.848-0500 "configsvr" : true,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.617-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-158--2588534479858262356 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 16)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.848-0500 [jsTest] New session started with sessionID: { "id" : UUID("f5d7fcad-c765-4618-872a-6ff2ef74e0a9") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.326-0500 W STORAGE [conn96] Could not complete validation of table:collection-225--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.848-0500 "protocolVersion" : NumberLong(1),
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.161-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection config.shards
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.849-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.988-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd)'. Ident: 'index-355--8000595249233899911', commit timestamp: 'Timestamp(1574796739, 13)'
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.849-0500 "writeConcernMajorityJournalDefault" : true,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.331-0500 I INDEX [conn96] Validation complete for collection test4_fsmdb0.fsmcoll0 (UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.849-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.989-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd)'. Ident: 'index-355--4104909142373009110', commit timestamp: 'Timestamp(1574796739, 13)'
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.849-0500 "members" : [
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.548-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40118 #205 (40 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.849-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:20.951-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test5_fsmdb0.agg_out took 0 ms and found the collection is not sharded
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.849-0500 {
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.619-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-156--2588534479858262356 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 16)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.850-0500 Running data consistency checks for replica set: shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.326-0500 I INDEX [conn96] validating the internal structure of index _id_ on collection test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.850-0500 "_id" : 0,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.163-0500 I INDEX [conn145] validating the internal structure of index host_1 on collection config.shards
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.850-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:19.988-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test4_fsmdb0.fsmcoll0'. Ident: collection-353--8000595249233899911, commit timestamp: Timestamp(1574796739, 13)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.850-0500 "host" : "localhost:20000",
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.333-0500 I NETWORK [conn96] end connection 127.0.0.1:52746 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.850-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:19.989-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test4_fsmdb0.fsmcoll0'. Ident: collection-353--4104909142373009110, commit timestamp: Timestamp(1574796739, 13)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.851-0500 "arbiterOnly" : false,
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.548-0500 I NETWORK [conn205] received client metadata from 127.0.0.1:40118 conn205: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.851-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.030-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46060 #197 (4 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.851-0500 "buildIndexes" : true,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.620-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-161--2588534479858262356 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 25)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.851-0500 [jsTest] New session started with sessionID: { "id" : UUID("958f81f5-ed81-4ba7-a480-0fb9bceea736") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.327-0500 W STORAGE [conn96] Could not complete validation of table:index-226--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.851-0500 "hidden" : false,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.165-0500 I INDEX [conn145] validating collection config.shards (UUID: ed6a2b77-0788-4ad3-a1b0-ccd61535c24f)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.851-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.002-0500 I COMMAND [ReplWriterWorker-14] CMD: drop config.cache.chunks.test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.852-0500 "priority" : 1,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.346-0500 I NETWORK [conn95] end connection 127.0.0.1:52718 (11 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.852-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.003-0500 I COMMAND [ReplWriterWorker-6] CMD: drop config.cache.chunks.test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.852-0500 "tags" : {
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.549-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40120 #206 (41 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.852-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.030-0500 I NETWORK [conn197] received client metadata from 127.0.0.1:46060 conn197: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.852-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.621-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-162--2588534479858262356 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 25)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.852-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.327-0500 I INDEX [conn96] validating the internal structure of index _id_hashed on collection test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.853-0500 },
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.165-0500 I INDEX [conn145] validating index consistency _id_ on collection config.shards
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.853-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.002-0500 I STORAGE [ReplWriterWorker-14] dropCollection: config.cache.chunks.test4_fsmdb0.fsmcoll0 (647e6274-b0dc-4671-90c7-65b5ed709ba8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796739, 21), t: 1 } and commit timestamp Timestamp(1574796739, 21)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.853-0500 "slaveDelay" : NumberLong(0),
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.975-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test4_fsmdb0.agg_out
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.853-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.003-0500 I STORAGE [ReplWriterWorker-6] dropCollection: config.cache.chunks.test4_fsmdb0.fsmcoll0 (647e6274-b0dc-4671-90c7-65b5ed709ba8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796739, 21), t: 1 } and commit timestamp Timestamp(1574796739, 21)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.853-0500 "votes" : 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.549-0500 I NETWORK [conn206] received client metadata from 127.0.0.1:40120 conn206: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.853-0500 [jsTest] New session started with sessionID: { "id" : UUID("dee53c14-2116-4bb7-9905-ad02c416e516") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.030-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46062 #198 (5 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.854-0500 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.622-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-160--2588534479858262356 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 25)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.854-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.327-0500 W STORAGE [conn96] Could not complete validation of table:index-227--7234316082034423155. This is a transient issue as the collection was actively in use by other operations.
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.854-0500 ],
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.165-0500 I INDEX [conn145] validating index consistency host_1 on collection config.shards
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.854-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.002-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for config.cache.chunks.test4_fsmdb0.fsmcoll0 (647e6274-b0dc-4671-90c7-65b5ed709ba8).
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.854-0500 "settings" : {
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.975-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test4_fsmdb0.agg_out (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796739, 5), t: 1 } and commit timestamp Timestamp(1574796739, 5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.854-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.003-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for config.cache.chunks.test4_fsmdb0.fsmcoll0 (647e6274-b0dc-4671-90c7-65b5ed709ba8).
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.855-0500 "chainingAllowed" : true,
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.551-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40132 #207 (42 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.855-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.030-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46064 #199 (6 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.855-0500 "heartbeatIntervalMillis" : 2000,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.623-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-225--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2618)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.855-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.327-0500 I INDEX [conn96] validating collection test4_fsmdb0.fsmcoll0 (UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.855-0500 "heartbeatTimeoutSecs" : 10,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.165-0500 I INDEX [conn145] Validation complete for collection config.shards (UUID: ed6a2b77-0788-4ad3-a1b0-ccd61535c24f). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.855-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.002-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0 (647e6274-b0dc-4671-90c7-65b5ed709ba8)'. Ident: 'index-358--8000595249233899911', commit timestamp: 'Timestamp(1574796739, 21)'
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.856-0500 "electionTimeoutMillis" : 86400000,
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.975-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test4_fsmdb0.agg_out (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393).
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.856-0500 [jsTest] New session started with sessionID: { "id" : UUID("beb981e5-96f3-43c2-9cdb-ec8a243b4cb8") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.003-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0 (647e6274-b0dc-4671-90c7-65b5ed709ba8)'. Ident: 'index-358--4104909142373009110', commit timestamp: 'Timestamp(1574796739, 21)'
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.856-0500 "catchUpTimeoutMillis" : -1,
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.551-0500 I NETWORK [conn207] received client metadata from 127.0.0.1:40132 conn207: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.856-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.030-0500 I NETWORK [conn198] received client metadata from 127.0.0.1:46062 conn198: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.856-0500 "catchUpTakeoverDelayMillis" : 30000,
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.623-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-226--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2618)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.856-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.329-0500 I INDEX [conn96] validating index consistency _id_ on collection test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.857-0500 "getLastErrorModes" : {
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.166-0500 I COMMAND [conn145] CMD: validate config.system.sessions, full:true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.857-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.002-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0 (647e6274-b0dc-4671-90c7-65b5ed709ba8)'. Ident: 'index-359--8000595249233899911', commit timestamp: 'Timestamp(1574796739, 21)'
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.857-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.975-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393)'. Ident: 'index-438--2310912778499990807', commit timestamp: 'Timestamp(1574796739, 5)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.857-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.003-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0 (647e6274-b0dc-4671-90c7-65b5ed709ba8)'. Ident: 'index-359--4104909142373009110', commit timestamp: 'Timestamp(1574796739, 21)'
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.857-0500 },
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.551-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40136 #208 (43 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.857-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.031-0500 I NETWORK [conn199] received client metadata from 127.0.0.1:46064 conn199: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.857-0500 "getLastErrorDefaults" : {
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.624-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-224--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2618)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.858-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.329-0500 I INDEX [conn96] validating index consistency _id_hashed on collection test4_fsmdb0.fsmcoll0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.858-0500 "w" : 1,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.166-0500 W STORAGE [conn145] Could not complete validation of table:collection-53-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.858-0500 [jsTest] New session started with sessionID: { "id" : UUID("6c769f82-2a92-4b85-940e-77746ebce400") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.002-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0'. Ident: collection-357--8000595249233899911, commit timestamp: Timestamp(1574796739, 21)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.858-0500 "wtimeout" : 0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.975-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393)'. Ident: 'index-447--2310912778499990807', commit timestamp: 'Timestamp(1574796739, 5)'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.858-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.003-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0'. Ident: collection-357--4104909142373009110, commit timestamp: Timestamp(1574796739, 21)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.858-0500 },
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.552-0500 I NETWORK [conn208] received client metadata from 127.0.0.1:40136 conn208: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.859-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.039-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46066 #200 (7 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.859-0500 "replicaSetId" : ObjectId("5ddd7d655cde74b6784bb14d")
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.625-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-233--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2967)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.859-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.329-0500 I INDEX [conn96] Validation complete for collection test4_fsmdb0.fsmcoll0 (UUID: 08555f78-3db2-4ee9-9e10-8c80139ec7dd). No corruption found.
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.859-0500 }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.166-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection config.system.sessions
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.859-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.011-0500 I COMMAND [ReplWriterWorker-4] dropDatabase test4_fsmdb0 - starting
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.859-0500 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.975-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-437--2310912778499990807, commit timestamp: Timestamp(1574796739, 5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.860-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.012-0500 I COMMAND [ReplWriterWorker-3] dropDatabase test4_fsmdb0 - starting
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.860-0500
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.567-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40150 #209 (44 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.860-0500 [jsTest] ----
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.040-0500 I NETWORK [conn200] received client metadata from 127.0.0.1:46066 conn200: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.860-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.627-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-238--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2967)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.860-0500 [jsTest] New session started with sessionID: { "id" : UUID("595ee9f0-3fb5-4d65-88f3-ac8f5c64d429") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.331-0500 I NETWORK [conn96] end connection 127.0.0.1:36108 (12 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.860-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.166-0500 W STORAGE [conn145] Could not complete validation of table:index-54-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.860-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.011-0500 I COMMAND [ReplWriterWorker-4] dropDatabase test4_fsmdb0 - dropped 0 collection(s)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.860-0500 [jsTest] New session started with sessionID: { "id" : UUID("5ef8fec0-3ae1-4f6c-a7f5-d5449ae7698c") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.984-0500 I COMMAND [ReplWriterWorker-11] CMD: drop config.cache.chunks.test4_fsmdb0.agg_out
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.861-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.012-0500 I COMMAND [ReplWriterWorker-3] dropDatabase test4_fsmdb0 - dropped 0 collection(s)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.861-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.568-0500 I NETWORK [conn209] received client metadata from 127.0.0.1:40150 conn209: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.861-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.040-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46068 #201 (8 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.861-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.628-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-228--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2967)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.861-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.346-0500 I NETWORK [conn95] end connection 127.0.0.1:36080 (11 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.166-0500 I INDEX [conn145] validating collection config.system.sessions (UUID: 9014747b-5aa2-462f-9e13-1e6b27298390)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.861-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.011-0500 I COMMAND [ReplWriterWorker-4] dropDatabase test4_fsmdb0 - finished
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.861-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.985-0500 I STORAGE [ReplWriterWorker-11] dropCollection: config.cache.chunks.test4_fsmdb0.agg_out (b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796739, 9), t: 1 } and commit timestamp Timestamp(1574796739, 9)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.861-0500 Recreating replica set from config {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.012-0500 I COMMAND [ReplWriterWorker-3] dropDatabase test4_fsmdb0 - finished
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.862-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.577-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40162 #210 (45 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.862-0500 "_id" : "shard-rs0",
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.040-0500 I NETWORK [conn201] received client metadata from 127.0.0.1:46068 conn201: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.862-0500 [jsTest] New session started with sessionID: { "id" : UUID("20e0be63-4900-4609-a555-412585948209") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.629-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-237--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3032)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.862-0500 "version" : 2,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.167-0500 I INDEX [conn145] validating index consistency _id_ on collection config.system.sessions
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.862-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.975-0500 I COMMAND [ReplWriterWorker-12] CMD: drop test4_fsmdb0.agg_out
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.862-0500 "protocolVersion" : NumberLong(1),
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.016-0500 I SHARDING [ReplWriterWorker-7] setting this node's cached database version for test4_fsmdb0 to {}
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.863-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.985-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for config.cache.chunks.test4_fsmdb0.agg_out (b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2).
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.863-0500 "writeConcernMajorityJournalDefault" : true,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.018-0500 I SHARDING [ReplWriterWorker-9] setting this node's cached database version for test4_fsmdb0 to {}
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.863-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.863-0500 "members" : [
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.863-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.863-0500 "_id" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.863-0500 "host" : "localhost:20001",
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.863-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.863-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.863-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.863-0500 "priority" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 "_id" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 "host" : "localhost:20002",
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 "_id" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 "host" : "localhost:20003",
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.864-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 ],
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 "settings" : {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 "chainingAllowed" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 "heartbeatIntervalMillis" : 2000,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 "heartbeatTimeoutSecs" : 10,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 "electionTimeoutMillis" : 86400000,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 "catchUpTimeoutMillis" : -1,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 "catchUpTakeoverDelayMillis" : 30000,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 "getLastErrorModes" : {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 },
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.577-0500 I NETWORK [conn210] received client metadata from 127.0.0.1:40162 conn210: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.865-0500 "getLastErrorDefaults" : {
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.042-0500 I NETWORK [conn197] end connection 127.0.0.1:46060 (7 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.866-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js finished.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.630-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-246--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3032)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.866-0500 "w" : 1,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.167-0500 I INDEX [conn145] Validation complete for collection config.system.sessions (UUID: 9014747b-5aa2-462f-9e13-1e6b27298390). No corruption found.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.867-0500 Starting JSTest jstests/hooks/run_check_repl_dbhash_background.js...
PATH=/home/nz_linux/mongo:/data/multiversion:/home/nz_linux/bin:/home/nz_linux/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/nz_linux/bin:/usr/local/go/bin:/opt/mongodbtoolchain/v3/bin mongo --eval MongoRunner.dataDir = "/home/nz_linux/data/job0/mongorunner"; MongoRunner.dataPath = "/home/nz_linux/data/job0/mongorunner/"; MongoRunner.mongoShellPath = "/home/nz_linux/mongo/mongo"; TestData = new Object(); TestData["minPort"] = 20020; TestData["maxPort"] = 20249; TestData["peerPids"] = [13986, 14076, 14079, 14082, 14340, 14343, 14346]; TestData["failIfUnterminatedProcesses"] = true; TestData["isMainTest"] = true; TestData["numTestClients"] = 1; TestData["enableMajorityReadConcern"] = true; TestData["mixedBinVersions"] = ""; TestData["noJournal"] = false; TestData["serviceExecutor"] = ""; TestData["storageEngine"] = ""; TestData["storageEngineCacheSizeGB"] = ""; TestData["testName"] = "run_check_repl_dbhash_background"; TestData["transportLayer"] = ""; TestData["wiredTigerCollectionConfigString"] = ""; TestData["wiredTigerEngineConfigString"] = ""; TestData["wiredTigerIndexConfigString"] = ""; TestData["setParameters"] = new Object(); TestData["setParameters"]["logComponentVerbosity"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"] = new Object(); TestData["setParameters"]["logComponentVerbosity"]["replication"]["rollback"] = 2; TestData["setParameters"]["logComponentVerbosity"]["transaction"] = 4; TestData["setParametersMongos"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"] = new Object(); TestData["setParametersMongos"]["logComponentVerbosity"]["transaction"] = 3; TestData["transactionLifetimeLimitSeconds"] = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; 
load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); mongodb://localhost:20007,localhost:20008 jstests/hooks/run_check_repl_dbhash_background.js
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.976-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test4_fsmdb0.agg_out (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796739, 5), t: 1 } and commit timestamp Timestamp(1574796739, 5)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.867-0500 "wtimeout" : 0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.867-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 "replicaSetId" : ObjectId("5ddd7d683bbfe7fa5630d3b8")
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 [jsTest] New session started with sessionID: { "id" : UUID("654e6586-3758-483a-9d9b-56393f0e0228") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 Recreating replica set from config {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 "_id" : "shard-rs1",
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 "version" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 "protocolVersion" : NumberLong(1),
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 "writeConcernMajorityJournalDefault" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 "members" : [
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 "_id" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 "host" : "localhost:20004",
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.868-0500 "priority" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "_id" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "host" : "localhost:20005",
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "_id" : 2,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "host" : "localhost:20006",
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "arbiterOnly" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "buildIndexes" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.869-0500 "hidden" : false,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 "priority" : 0,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 "tags" : {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 "slaveDelay" : NumberLong(0),
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 "votes" : 1
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 ],
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 "settings" : {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 "chainingAllowed" : true,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 "heartbeatIntervalMillis" : 2000,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 "heartbeatTimeoutSecs" : 10,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 "electionTimeoutMillis" : 86400000,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 "catchUpTimeoutMillis" : -1,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 "catchUpTakeoverDelayMillis" : 30000,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 "getLastErrorModes" : {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 "getLastErrorDefaults" : {
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 "w" : 1,
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 "wtimeout" : 0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 },
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 "replicaSetId" : ObjectId("5ddd7d6bcf8184c2e1492eba")
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.870-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500 }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500 [jsTest] New session started with sessionID: { "id" : UUID("8d3d5e22-b37c-41d8-a003-41cfe197166e") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500 [jsTest] New session started with sessionID: { "id" : UUID("7d06f07b-3abd-4b14-9ab0-417cf5cf1ac9") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500 [jsTest] New session started with sessionID: { "id" : UUID("3ea927b8-3f9f-4100-b5ba-2cca59be5cb7") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.871-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500 [jsTest] New session started with sessionID: { "id" : UUID("5ec48c1e-b2bd-40f2-a34e-6bd6aaea7723") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500 [jsTest] New session started with sessionID: { "id" : UUID("288df13f-33f7-4668-96c0-c2aed10661c9") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500 [jsTest] New session started with sessionID: { "id" : UUID("70e92cb9-916f-4b6a-b95e-a17e6933744f") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500 [jsTest] Workload(s) started: jstests/concurrency/fsm_workloads/agg_out.js
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.872-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 [jsTest] New session started with sessionID: { "id" : UUID("dd718102-e73d-4e8c-9c24-9aea49593289") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 Using 5 threads (requested 5)
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 Implicit session: session { "id" : UUID("398ba828-897a-48a7-b382-08d9a95e8e6f") }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 Implicit session: session { "id" : UUID("ecf7a68a-f217-480e-a25a-8fc87651d55d") }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 Implicit session: session { "id" : UUID("13d4a9fd-5ddf-4454-8847-4df00fd76a39") }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 Implicit session: session { "id" : UUID("a110cce7-24da-4230-ad4f-1e86bcfed6b3") }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 Implicit session: session { "id" : UUID("e8edcb25-df11-436c-9eae-f1a6d0e0624f") }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.873-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500 MongoDB server version: 0.0.0
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500 [tid:1] setting random seed: 1698476146
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500 [tid:0] setting random seed: 388134170
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500 [tid:2] setting random seed: 1219078014
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500 [tid:4] setting random seed: 278446236
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500 [tid:3] setting random seed: 4223875219
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500 [tid:1]
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500 [jsTest] New session started with sessionID: { "id" : UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500 [tid:4]
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500 [jsTest] New session started with sessionID: { "id" : UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500 [tid:0]
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.874-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500 [jsTest] New session started with sessionID: { "id" : UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500 [tid:2]
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500 [jsTest] New session started with sessionID: { "id" : UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500 [tid:3]
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500 [jsTest] New session started with sessionID: { "id" : UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500
[fsm_workload_test:agg_out] 2019-11-26T14:32:21.875-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.548-0500 I NETWORK [listener] connection accepted from 127.0.0.1:54038 #101 (14 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.985-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test4_fsmdb0.agg_out (b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2)'. Ident: 'index-460--2310912778499990807', commit timestamp: 'Timestamp(1574796739, 9)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.548-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53148 #101 (13 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.579-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40164 #211 (46 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.043-0500 I NETWORK [conn198] end connection 127.0.0.1:46062 (6 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.631-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-232--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3032)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.167-0500 I COMMAND [conn145] CMD: validate config.tags, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.976-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test4_fsmdb0.agg_out (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.548-0500 I NETWORK [conn101] received client metadata from 127.0.0.1:54038 conn101: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.985-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test4_fsmdb0.agg_out (b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2)'. Ident: 'index-461--2310912778499990807', commit timestamp: 'Timestamp(1574796739, 9)'
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:21.879-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796741, 2527), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 611ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.548-0500 I NETWORK [conn101] received client metadata from 127.0.0.1:53148 conn101: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.579-0500 I NETWORK [conn211] received client metadata from 127.0.0.1:40164 conn211: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.043-0500 I NETWORK [conn199] end connection 127.0.0.1:46064 (5 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.631-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-235--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3033)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.167-0500 W STORAGE [conn145] Could not complete validation of table:collection-35-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.976-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393)'. Ident: 'index-438--7234316082034423155', commit timestamp: 'Timestamp(1574796739, 5)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.551-0500 I NETWORK [listener] connection accepted from 127.0.0.1:54052 #102 (15 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.985-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'config.cache.chunks.test4_fsmdb0.agg_out'. Ident: collection-459--2310912778499990807, commit timestamp: Timestamp(1574796739, 9)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.551-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53162 #102 (14 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.580-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40170 #212 (47 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.051-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46072 #202 (6 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.632-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-242--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3033)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.168-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection config.tags
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.976-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393)'. Ident: 'index-447--7234316082034423155', commit timestamp: 'Timestamp(1574796739, 5)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.551-0500 I NETWORK [conn102] received client metadata from 127.0.0.1:54052 conn102: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.992-0500 I COMMAND [ReplWriterWorker-12] CMD: drop test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.551-0500 I NETWORK [conn102] received client metadata from 127.0.0.1:53162 conn102: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.580-0500 I NETWORK [conn212] received client metadata from 127.0.0.1:40170 conn212: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.051-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46074 #203 (7 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.633-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-230--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3033)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.168-0500 W STORAGE [conn145] Could not complete validation of table:index-36-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.976-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-437--7234316082034423155, commit timestamp: Timestamp(1574796739, 5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.882-0500 JSTest jstests/hooks/run_check_repl_dbhash_background.js started with pid 16487.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.580-0500 I NETWORK [listener] connection accepted from 127.0.0.1:54090 #103 (16 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.992-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796739, 14), t: 1 } and commit timestamp Timestamp(1574796739, 14)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.579-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53200 #103 (15 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.607-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test5_fsmdb0 from version {} to version { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.051-0500 I NETWORK [conn202] received client metadata from 127.0.0.1:46072 conn202: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.635-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-236--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6) with drop timestamp Timestamp(1574796717, 5)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.168-0500 I INDEX [conn145] validating the internal structure of index ns_1_min_1 on collection config.tags
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.985-0500 I COMMAND [ReplWriterWorker-14] CMD: drop config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.580-0500 I NETWORK [conn103] received client metadata from 127.0.0.1:54090 conn103: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.992-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.579-0500 I NETWORK [conn103] received client metadata from 127.0.0.1:53200 conn103: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.607-0500 I SHARDING [conn37] setting this node's cached database version for test5_fsmdb0 to { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.051-0500 I NETWORK [conn203] received client metadata from 127.0.0.1:46074 conn203: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.636-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-244--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6) with drop timestamp Timestamp(1574796717, 5)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.168-0500 W STORAGE [conn145] Could not complete validation of table:index-37-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.985-0500 I STORAGE [ReplWriterWorker-14] dropCollection: config.cache.chunks.test4_fsmdb0.agg_out (b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796739, 9), t: 1 } and commit timestamp Timestamp(1574796739, 9)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.985-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for config.cache.chunks.test4_fsmdb0.agg_out (b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.992-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd)'. Ident: 'index-226--2310912778499990807', commit timestamp: 'Timestamp(1574796739, 14)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.616-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53226 #104 (16 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.613-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40186 #213 (48 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.060-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46076 #204 (8 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.637-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-231--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6) with drop timestamp Timestamp(1574796717, 5)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.168-0500 I INDEX [conn145] validating the internal structure of index ns_1_tag_1 on collection config.tags
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.617-0500 I NETWORK [listener] connection accepted from 127.0.0.1:54116 #104 (17 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.985-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test4_fsmdb0.agg_out (b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2)'. Ident: 'index-460--7234316082034423155', commit timestamp: 'Timestamp(1574796739, 9)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.992-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd)'. Ident: 'index-227--2310912778499990807', commit timestamp: 'Timestamp(1574796739, 14)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.617-0500 I NETWORK [conn104] received client metadata from 127.0.0.1:53226 conn104: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.613-0500 I NETWORK [conn213] received client metadata from 127.0.0.1:40186 conn213: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.060-0500 I NETWORK [conn204] received client metadata from 127.0.0.1:46076 conn204: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.638-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-234--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2151)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.168-0500 W STORAGE [conn145] Could not complete validation of table:index-38-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.168-0500 I INDEX [conn145] validating collection config.tags (UUID: d225b508-e40e-4c3c-a716-26adc4561055)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.985-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test4_fsmdb0.agg_out (b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2)'. Ident: 'index-461--7234316082034423155', commit timestamp: 'Timestamp(1574796739, 9)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:19.992-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test4_fsmdb0.fsmcoll0'. Ident: collection-225--2310912778499990807, commit timestamp: Timestamp(1574796739, 14)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.627-0500 W CONTROL [conn104] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 718 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.615-0500 I STORAGE [conn37] createCollection: test5_fsmdb0.fsmcoll0 with provided UUID: aad04aec-10f6-4c2e-aadf-f1052ef9cc6a and options: { uuid: UUID("aad04aec-10f6-4c2e-aadf-f1052ef9cc6a") }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.063-0500 I NETWORK [conn203] end connection 127.0.0.1:46074 (7 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.639-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-240--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2151)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.639-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-229--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2151)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.168-0500 I INDEX [conn145] validating index consistency _id_ on collection config.tags
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.985-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'config.cache.chunks.test4_fsmdb0.agg_out'. Ident: collection-459--7234316082034423155, commit timestamp: Timestamp(1574796739, 9)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.004-0500 I COMMAND [ReplWriterWorker-15] CMD: drop config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.640-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.fsmcoll0 with provided UUID: aad04aec-10f6-4c2e-aadf-f1052ef9cc6a and options: { uuid: UUID("aad04aec-10f6-4c2e-aadf-f1052ef9cc6a") }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.616-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40190 #214 (49 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.063-0500 I NETWORK [conn202] end connection 127.0.0.1:46072 (6 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.617-0500 I NETWORK [conn104] received client metadata from 127.0.0.1:54116 conn104: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.640-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-251--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717) with drop timestamp Timestamp(1574796717, 2216)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.168-0500 I INDEX [conn145] validating index consistency ns_1_min_1 on collection config.tags
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.992-0500 I COMMAND [ReplWriterWorker-10] CMD: drop test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.004-0500 I STORAGE [ReplWriterWorker-15] dropCollection: config.cache.chunks.test4_fsmdb0.fsmcoll0 (c7f3cab2-be92-4a48-8ca9-60ce74a83411) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796739, 23), t: 1 } and commit timestamp Timestamp(1574796739, 23)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.659-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.616-0500 I NETWORK [conn214] received client metadata from 127.0.0.1:40190 conn214: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.226-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 155ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.627-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.fsmcoll0 with provided UUID: aad04aec-10f6-4c2e-aadf-f1052ef9cc6a and options: { uuid: UUID("aad04aec-10f6-4c2e-aadf-f1052ef9cc6a") }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.641-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-260--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717) with drop timestamp Timestamp(1574796717, 2216)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.168-0500 I INDEX [conn145] validating index consistency ns_1_tag_1 on collection config.tags
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.992-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796739, 14), t: 1 } and commit timestamp Timestamp(1574796739, 14)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.004-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for config.cache.chunks.test4_fsmdb0.fsmcoll0 (c7f3cab2-be92-4a48-8ca9-60ce74a83411).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.674-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.625-0500 I INDEX [conn37] index build: done building index _id_ on ns test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.226-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 155ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.627-0500 W CONTROL [conn104] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 323 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.643-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-249--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717) with drop timestamp Timestamp(1574796717, 2216)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.168-0500 I INDEX [conn145] Validation complete for collection config.tags (UUID: d225b508-e40e-4c3c-a716-26adc4561055). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.992-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.004-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0 (c7f3cab2-be92-4a48-8ca9-60ce74a83411)'. Ident: 'index-230--2310912778499990807', commit timestamp: 'Timestamp(1574796739, 23)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.674-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.625-0500 I INDEX [conn37] Registering index build: 9c38cd91-3e8b-478f-96a3-a2c4cdc184ba
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.265-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 194ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.639-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.644-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-250--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05) with drop timestamp Timestamp(1574796717, 2217)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.169-0500 I COMMAND [conn145] CMD: validate config.transactions, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.992-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd)'. Ident: 'index-226--7234316082034423155', commit timestamp: 'Timestamp(1574796739, 14)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.004-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0 (c7f3cab2-be92-4a48-8ca9-60ce74a83411)'. Ident: 'index-231--2310912778499990807', commit timestamp: 'Timestamp(1574796739, 23)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.674-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 3f2fe922-a730-4a23-98af-2482d0212c89: test5_fsmdb0.fsmcoll0 (aad04aec-10f6-4c2e-aadf-f1052ef9cc6a ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.626-0500 W CONTROL [conn214] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.849-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796741, 1022), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 621ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.657-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.645-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-257--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05) with drop timestamp Timestamp(1574796717, 2217)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.169-0500 W STORAGE [conn145] Could not complete validation of table:collection-15-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.992-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd)'. Ident: 'index-227--7234316082034423155', commit timestamp: 'Timestamp(1574796739, 14)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.004-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0'. Ident: collection-229--2310912778499990807, commit timestamp: Timestamp(1574796739, 23)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.674-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.638-0500 I INDEX [conn37] index build: starting on test5_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.657-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.646-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-248--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05) with drop timestamp Timestamp(1574796717, 2217)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.169-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection config.transactions
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:19.992-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test4_fsmdb0.fsmcoll0'. Ident: collection-225--7234316082034423155, commit timestamp: Timestamp(1574796739, 14)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.007-0500 I COMMAND [ReplWriterWorker-1] dropDatabase test4_fsmdb0 - starting
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.675-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.638-0500 I INDEX [conn37] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.657-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: f141c5b0-6cc0-41d2-887f-14ad873694c2: test5_fsmdb0.fsmcoll0 (aad04aec-10f6-4c2e-aadf-f1052ef9cc6a ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.647-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-253--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996) with drop timestamp Timestamp(1574796717, 2218)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.169-0500 W STORAGE [conn145] Could not complete validation of table:index-16-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.004-0500 I COMMAND [ReplWriterWorker-7] CMD: drop config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.007-0500 I COMMAND [ReplWriterWorker-1] dropDatabase test4_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.678-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.fsmcoll0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:21.909-0500 MongoDB shell version v0.0.0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.638-0500 I STORAGE [conn37] Index build initialized: 9c38cd91-3e8b-478f-96a3-a2c4cdc184ba: test5_fsmdb0.fsmcoll0 (aad04aec-10f6-4c2e-aadf-f1052ef9cc6a ): indexes: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.888-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.888-0500 Implicit session: session { "id" : UUID("14969dc7-0819-4f6a-86e4-54284cf6e6a3") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.888-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.888-0500 true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.888-0500 2019-11-26T14:32:21.970-0500 I NETWORK [js] Starting new replica set monitor for config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.888-0500 2019-11-26T14:32:21.970-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.888-0500 2019-11-26T14:32:21.971-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for config-rs is config-rs/localhost:20000
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.888-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.888-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.888-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.888-0500 [jsTest] New session started with sessionID: { "id" : UUID("38746ffb-6479-44b9-8b3e-bcd281718f38") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.888-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.888-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.888-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.888-0500 2019-11-26T14:32:21.974-0500 I NETWORK [js] Starting new replica set monitor for shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500 2019-11-26T14:32:21.974-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20002
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500 2019-11-26T14:32:21.974-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500 2019-11-26T14:32:21.974-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500 2019-11-26T14:32:21.975-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs0 is shard-rs0/localhost:20001,localhost:20002,localhost:20003
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500 [jsTest] New session started with sessionID: { "id" : UUID("e8cee555-35a7-448e-8702-72b60bd0ca56") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500 2019-11-26T14:32:21.976-0500 I NETWORK [js] Starting new replica set monitor for shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500 2019-11-26T14:32:21.977-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20005
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500 2019-11-26T14:32:21.977-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500 2019-11-26T14:32:21.977-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20004
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500 2019-11-26T14:32:21.977-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard-rs1 is shard-rs1/localhost:20004,localhost:20005,localhost:20006
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.889-0500 [jsTest] New session started with sessionID: { "id" : UUID("4d2119a3-c577-48df-9d61-68d4a3d1a127") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500 Skipping data consistency checks for 1-node CSRS: { "type" : "replica set", "primary" : "localhost:20000", "nodes" : [ "localhost:20000" ] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500 connecting to: mongodb://localhost:20007,localhost:20008/?compressors=disabled&gssapiServiceName=mongodb
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500 Implicit session: session { "id" : UUID("bfb136d4-f339-4c87-b9c9-70b5dd9e11f5") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500 Implicit session: session { "id" : UUID("5b726cb3-b7f7-4974-b41f-35d0af65b475") }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500 MongoDB server version: 0.0.0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500 [jsTest] New session started with sessionID: { "id" : UUID("9dd107f3-5d08-4dfb-b777-c6bc896bbd5e") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.890-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.891-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.891-0500 [jsTest] New session started with sessionID: { "id" : UUID("0bce6db7-af3f-42c2-a572-3771ca120015") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.891-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.891-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.891-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.657-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.891-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.647-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-254--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996) with drop timestamp Timestamp(1574796717, 2218)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.891-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.891-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.891-0500 [jsTest] New session started with sessionID: { "id" : UUID("779b4874-f141-43d4-963b-9a9a64aecd16") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.891-0500 [jsTest] ----
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.169-0500 I INDEX [conn145] validating collection config.transactions (UUID: c2741992-901b-4092-a01f-3dfe88ab21c5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.891-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.891-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.892-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.892-0500
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.004-0500 I STORAGE [ReplWriterWorker-7] dropCollection: config.cache.chunks.test4_fsmdb0.fsmcoll0 (c7f3cab2-be92-4a48-8ca9-60ce74a83411) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796739, 23), t: 1 } and commit timestamp Timestamp(1574796739, 23)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.892-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.007-0500 I COMMAND [ReplWriterWorker-1] dropDatabase test4_fsmdb0 - finished
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.892-0500 [jsTest] New session started with sessionID: { "id" : UUID("1f528513-5191-448e-bdc5-00eb90006431") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.893-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796741, 1022), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 665ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.892-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.892-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:22.010-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796741, 2527), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a\", to: \"test5_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:714 protocol:op_msg 742ms
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.892-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.892-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.892-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.892-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.892-0500 [jsTest] New session started with sessionID: { "id" : UUID("b03d9cf1-aacb-473c-a9c2-abef9eac4c71") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.892-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.893-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.893-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.893-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.893-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.893-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.893-0500 [jsTest] New session started with sessionID: { "id" : UUID("56c3b6f7-b1fb-41c7-b117-327c4890386e") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.893-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.893-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.893-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.679-0500 W CONTROL [conn104] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 724 }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.893-0500 Running data consistency checks for replica set: shard-rs1/localhost:20004,localhost:20005,localhost:20006
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.638-0500 I INDEX [conn37] Waiting for index build to complete: 9c38cd91-3e8b-478f-96a3-a2c4cdc184ba
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.893-0500 Running data consistency checks for replica set: shard-rs0/localhost:20001,localhost:20002,localhost:20003
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.658-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.893-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.648-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-252--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996) with drop timestamp Timestamp(1574796717, 2218)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.894-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.894-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.894-0500 [jsTest] New session started with sessionID: { "id" : UUID("b9892f6e-a700-4bab-ae96-24419a1d8b2d") } and options: { "causalConsistency" : false }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.169-0500 I INDEX [conn145] validating index consistency _id_ on collection config.transactions
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.894-0500 [jsTest] ----
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.004-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for config.cache.chunks.test4_fsmdb0.fsmcoll0 (c7f3cab2-be92-4a48-8ca9-60ce74a83411).
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.894-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.894-0500
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.018-0500 I SHARDING [ReplWriterWorker-5] setting this node's cached database version for test4_fsmdb0 to {}
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.894-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.894-0500
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.961-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46082 #205 (7 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.894-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.894-0500 [jsTest] New session started with sessionID: { "id" : UUID("a29e1b7b-3a6c-48cd-9284-ff8175e1545c") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500 [jsTest] New session started with sessionID: { "id" : UUID("e1b67ff1-8a78-4548-9dbb-02bc1ac063fd") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500 [jsTest] New session started with sessionID: { "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500 [jsTest] New session started with sessionID: { "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.895-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.896-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.896-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.896-0500 [jsTest] New session started with sessionID: { "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb") } and options: { "causalConsistency" : false }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.896-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.896-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:32:23.896-0500
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:22.057-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796741, 4051), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 163ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.679-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 3f2fe922-a730-4a23-98af-2482d0212c89: test5_fsmdb0.fsmcoll0 ( aad04aec-10f6-4c2e-aadf-f1052ef9cc6a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.639-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.660-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.649-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-258--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2531)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.169-0500 I INDEX [conn145] Validation complete for collection config.transactions (UUID: c2741992-901b-4092-a01f-3dfe88ab21c5). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.004-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0 (c7f3cab2-be92-4a48-8ca9-60ce74a83411)'. Ident: 'index-230--7234316082034423155', commit timestamp: 'Timestamp(1574796739, 23)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.550-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52788 #97 (12 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.962-0500 I NETWORK [conn205] received client metadata from 127.0.0.1:46082 conn205: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.682-0500 I NETWORK [conn104] end connection 127.0.0.1:53226 (15 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.639-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.661-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: f141c5b0-6cc0-41d2-887f-14ad873694c2: test5_fsmdb0.fsmcoll0 ( aad04aec-10f6-4c2e-aadf-f1052ef9cc6a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.651-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-264--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2531)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.170-0500 I COMMAND [conn145] CMD: validate config.version, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.004-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0 (c7f3cab2-be92-4a48-8ca9-60ce74a83411)'. Ident: 'index-231--7234316082034423155', commit timestamp: 'Timestamp(1574796739, 23)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.004-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0'. Ident: collection-229--7234316082034423155, commit timestamp: Timestamp(1574796739, 23)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:21.981-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796741, 2526), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e\", to: \"test5_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:714 protocol:op_msg 713ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.692-0500 I NETWORK [conn101] end connection 127.0.0.1:53148 (14 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.641-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.679-0500 W CONTROL [conn104] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.652-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-255--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2531)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.172-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection config.version
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.551-0500 I NETWORK [conn97] received client metadata from 127.0.0.1:52788 conn97: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.007-0500 I COMMAND [ReplWriterWorker-13] dropDatabase test4_fsmdb0 - starting
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:22.035-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46104 #206 (8 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.752-0500 I STORAGE [ReplWriterWorker-3] createCollection: config.cache.chunks.test5_fsmdb0.fsmcoll0 with provided UUID: a801c5e5-16a4-42f8-a221-89c1b6217d87 and options: { uuid: UUID("a801c5e5-16a4-42f8-a221-89c1b6217d87") }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.644-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 9c38cd91-3e8b-478f-96a3-a2c4cdc184ba: test5_fsmdb0.fsmcoll0 ( aad04aec-10f6-4c2e-aadf-f1052ef9cc6a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.682-0500 I NETWORK [conn104] end connection 127.0.0.1:54116 (16 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.653-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-263--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 506)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.654-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-266--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 506)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.655-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-261--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 506)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.553-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52806 #98 (13 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:22.035-0500 I NETWORK [conn206] received client metadata from 127.0.0.1:46104 conn206: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:22.035-0500 I NETWORK [listener] connection accepted from 127.0.0.1:46106 #207 (9 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.644-0500 I INDEX [conn37] Index build completed: 9c38cd91-3e8b-478f-96a3-a2c4cdc184ba
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.692-0500 I NETWORK [conn101] end connection 127.0.0.1:54038 (15 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.175-0500 I INDEX [conn145] validating collection config.version (UUID: d52b8328-6d55-4f54-8cfd-e715a58e3315)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.007-0500 I COMMAND [ReplWriterWorker-13] dropDatabase test4_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.655-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-272--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 1015)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.656-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-276--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 1015)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.767-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns config.cache.chunks.test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:22.036-0500 I NETWORK [conn207] received client metadata from 127.0.0.1:46106 conn207: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.661-0500 I SHARDING [conn37] CMD: shardcollection: { _shardsvrShardCollection: "test5_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("dd718102-e73d-4e8c-9c24-9aea49593289"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796740, 19), signature: { hash: BinData(0, 2F6A9CF6DC62114B4DBA952DB92DB422C8EABEED), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46012", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 19), t: 1 } }, $db: "admin" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.734-0500 I STORAGE [ReplWriterWorker-4] createCollection: config.cache.chunks.test5_fsmdb0.fsmcoll0 with provided UUID: a801c5e5-16a4-42f8-a221-89c1b6217d87 and options: { uuid: UUID("a801c5e5-16a4-42f8-a221-89c1b6217d87") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.175-0500 I INDEX [conn145] validating index consistency _id_ on collection config.version
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.007-0500 I COMMAND [ReplWriterWorker-13] dropDatabase test4_fsmdb0 - finished
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.554-0500 I NETWORK [conn98] received client metadata from 127.0.0.1:52806 conn98: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.657-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-268--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 1015)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.659-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-273--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:22.047-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796741, 3041), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c\", to: \"test5_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:714 protocol:op_msg 196ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.661-0500 I SHARDING [conn37] about to log metadata event into changelog: { _id: "nz_desktop:20001-2019-11-26T14:32:20.661-0500-5ddd7dc43bbfe7fa5630eb03", server: "nz_desktop:20001", shard: "shard-rs0", clientAddr: "127.0.0.1:38444", time: new Date(1574796740661), what: "shardCollection.start", ns: "test5_fsmdb0.fsmcoll0", details: { shardKey: { _id: "hashed" }, collection: "test5_fsmdb0.fsmcoll0", uuid: UUID("aad04aec-10f6-4c2e-aadf-f1052ef9cc6a"), empty: true, fromMapReduce: false, primary: "shard-rs0:shard-rs0/localhost:20001,localhost:20002,localhost:20003", numChunks: 4 } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.751-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns config.cache.chunks.test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.175-0500 I INDEX [conn145] Validation complete for collection config.version (UUID: d52b8328-6d55-4f54-8cfd-e715a58e3315). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.018-0500 I SHARDING [ReplWriterWorker-5] setting this node's cached database version for test4_fsmdb0 to {}
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.584-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52840 #99 (14 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.787-0500 I INDEX [ReplWriterWorker-15] index build: starting on config.cache.chunks.test5_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.660-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-280--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:22.075-0500 I NETWORK [conn206] end connection 127.0.0.1:46104 (8 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.678-0500 W CONTROL [conn214] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 331 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.773-0500 I INDEX [ReplWriterWorker-5] index build: starting on config.cache.chunks.test5_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.178-0500 I COMMAND [conn145] CMD: validate local.oplog.rs, full:true
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.550-0500 I NETWORK [listener] connection accepted from 127.0.0.1:36150 #97 (12 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.584-0500 I NETWORK [conn99] received client metadata from 127.0.0.1:52840 conn99: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.787-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.661-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-269--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.681-0500 I NETWORK [conn213] end connection 127.0.0.1:40186 (48 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.773-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.179-0500 W STORAGE [conn145] Could not complete validation of table:collection-10-1646426263028043156. This is a transient issue as the collection was actively in use by other operations.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.551-0500 I NETWORK [conn97] received client metadata from 127.0.0.1:36150 conn97: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.619-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52862 #100 (15 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.787-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: a0baad31-22cb-4ad7-b437-cab805744398: config.cache.chunks.test5_fsmdb0.fsmcoll0 (a801c5e5-16a4-42f8-a221-89c1b6217d87 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.662-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-275--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 507)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.681-0500 I NETWORK [conn214] end connection 127.0.0.1:40190 (47 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.773-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 20f7e977-339e-4dd4-b052-2102d68e8a11: config.cache.chunks.test5_fsmdb0.fsmcoll0 (a801c5e5-16a4-42f8-a221-89c1b6217d87 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.179-0500 I INDEX [conn145] validating collection local.oplog.rs (UUID: 5bb0c359-7cb9-48f8-8ff8-4b4c84c12ec5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.553-0500 I NETWORK [listener] connection accepted from 127.0.0.1:36162 #98 (13 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.619-0500 I NETWORK [conn100] received client metadata from 127.0.0.1:52862 conn100: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.629-0500 W CONTROL [conn100] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 603 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.634-0500 W CONTROL [conn100] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 603 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.685-0500 I NETWORK [conn206] end connection 127.0.0.1:40120 (46 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.773-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.180-0500 I INDEX [conn145] Validation complete for collection local.oplog.rs (UUID: 5bb0c359-7cb9-48f8-8ff8-4b4c84c12ec5). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.553-0500 I NETWORK [conn98] received client metadata from 127.0.0.1:36162 conn98: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.585-0500 I NETWORK [listener] connection accepted from 127.0.0.1:36202 #99 (14 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.585-0500 I NETWORK [conn99] received client metadata from 127.0.0.1:36202 conn99: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.619-0500 I NETWORK [listener] connection accepted from 127.0.0.1:36224 #100 (15 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.692-0500 I NETWORK [conn205] end connection 127.0.0.1:40118 (45 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.774-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.181-0500 I COMMAND [conn145] CMD: validate local.replset.election, full:true
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.787-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.663-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-282--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 507)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.636-0500 I NETWORK [conn100] end connection 127.0.0.1:52862 (14 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.620-0500 I NETWORK [conn100] received client metadata from 127.0.0.1:36224 conn100: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.717-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test5_fsmdb0.fsmcoll0 to version 1|3||5ddd7dc43bbfe7fa5630eb06 took 1 ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.775-0500 I SHARDING [ReplWriterWorker-12] Marking collection config.cache.chunks.test5_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.182-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection local.replset.election
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.185-0500 I INDEX [conn145] validating collection local.replset.election (UUID: 5f00e271-c3c6-4d7b-9d39-1c8e9e8a77d4)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.185-0500 I INDEX [conn145] validating index consistency _id_ on collection local.replset.election
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.185-0500 I INDEX [conn145] Validation complete for collection local.replset.election (UUID: 5f00e271-c3c6-4d7b-9d39-1c8e9e8a77d4). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.630-0500 W CONTROL [conn100] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 492 }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.634-0500 W CONTROL [conn100] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 492 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.777-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.788-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.663-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-271--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 507)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.677-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.fsmcoll0 with provided UUID: aad04aec-10f6-4c2e-aadf-f1052ef9cc6a and options: { uuid: UUID("aad04aec-10f6-4c2e-aadf-f1052ef9cc6a") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.186-0500 I COMMAND [conn145] CMD: validate local.replset.minvalid, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.717-0500 I SHARDING [conn37] Marking collection test5_fsmdb0.fsmcoll0 as collection version: 1|3||5ddd7dc43bbfe7fa5630eb06, shard version: 1|1||5ddd7dc43bbfe7fa5630eb06
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.636-0500 I NETWORK [conn100] end connection 127.0.0.1:36224 (14 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.777-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.788-0500 I SHARDING [ReplWriterWorker-0] Marking collection config.cache.chunks.test5_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.664-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-274--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1012)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.692-0500 I NETWORK [conn97] end connection 127.0.0.1:52788 (13 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.187-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.717-0500 I STORAGE [ShardServerCatalogCacheLoader-0] createCollection: config.cache.chunks.test5_fsmdb0.fsmcoll0 with provided UUID: a801c5e5-16a4-42f8-a221-89c1b6217d87 and options: { uuid: UUID("a801c5e5-16a4-42f8-a221-89c1b6217d87") }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.677-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.fsmcoll0 with provided UUID: aad04aec-10f6-4c2e-aadf-f1052ef9cc6a and options: { uuid: UUID("aad04aec-10f6-4c2e-aadf-f1052ef9cc6a") }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.779-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 20f7e977-339e-4dd4-b052-2102d68e8a11: config.cache.chunks.test5_fsmdb0.fsmcoll0 ( a801c5e5-16a4-42f8-a221-89c1b6217d87 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.791-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.665-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-284--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1012)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.692-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.191-0500 I INDEX [conn145] validating collection local.replset.minvalid (UUID: ce934bfb-84f4-4d44-a963-37c09c6c95a6)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.732-0500 I INDEX [ShardServerCatalogCacheLoader-0] index build: done building index _id_ on ns config.cache.chunks.test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.692-0500 I NETWORK [conn97] end connection 127.0.0.1:36150 (13 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.934-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.agg_out with provided UUID: bafc131b-c28e-440d-97ac-3b147505078e and options: { uuid: UUID("bafc131b-c28e-440d-97ac-3b147505078e") }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.791-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.667-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-270--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1012)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.709-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.191-0500 I INDEX [conn145] validating index consistency _id_ on collection local.replset.minvalid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.733-0500 I INDEX [ShardServerCatalogCacheLoader-0] Registering index build: 395dc708-267f-42ff-9074-096cf0e93fdd
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.693-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.945-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.792-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: a0baad31-22cb-4ad7-b437-cab805744398: config.cache.chunks.test5_fsmdb0.fsmcoll0 ( a801c5e5-16a4-42f8-a221-89c1b6217d87 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.668-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-279--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1517)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.709-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.191-0500 I INDEX [conn145] Validation complete for collection local.replset.minvalid (UUID: ce934bfb-84f4-4d44-a963-37c09c6c95a6). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.748-0500 I INDEX [ShardServerCatalogCacheLoader-0] index build: starting on config.cache.chunks.test5_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.710-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.fsmcoll0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.980-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.946-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.agg_out with provided UUID: bafc131b-c28e-440d-97ac-3b147505078e and options: { uuid: UUID("bafc131b-c28e-440d-97ac-3b147505078e") }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.669-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-286--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1517)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.709-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: efdb5b57-1578-4c8d-b415-7a69d7650d6a: test5_fsmdb0.fsmcoll0 (aad04aec-10f6-4c2e-aadf-f1052ef9cc6a ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.192-0500 I COMMAND [conn145] CMD: validate local.replset.oplogTruncateAfterPoint, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.748-0500 I INDEX [ShardServerCatalogCacheLoader-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.710-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.980-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.959-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.670-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-277--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1517)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.709-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.196-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.748-0500 I STORAGE [ShardServerCatalogCacheLoader-0] Index build initialized: 395dc708-267f-42ff-9074-096cf0e93fdd: config.cache.chunks.test5_fsmdb0.fsmcoll0 (a801c5e5-16a4-42f8-a221-89c1b6217d87 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.710-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 6132a19e-7f82-464f-b93f-9610a710909b: test5_fsmdb0.fsmcoll0 (aad04aec-10f6-4c2e-aadf-f1052ef9cc6a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.980-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 5b9643e3-981a-4493-a17e-6aa2d30183df: test5_fsmdb0.agg_out (bafc131b-c28e-440d-97ac-3b147505078e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.994-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.670-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-289--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2021)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.710-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.200-0500 I INDEX [conn145] validating collection local.replset.oplogTruncateAfterPoint (UUID: b5258dce-fb89-4436-a191-b8586ea2e6c0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.748-0500 I INDEX [ShardServerCatalogCacheLoader-0] Waiting for index build to complete: 395dc708-267f-42ff-9074-096cf0e93fdd
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.710-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.980-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.994-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.671-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-290--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2021)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.712-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.200-0500 I INDEX [conn145] validating index consistency _id_ on collection local.replset.oplogTruncateAfterPoint
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.748-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.711-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.981-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.994-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 04cc8323-6a4b-47c0-b17b-1d4642fe20a3: test5_fsmdb0.agg_out (bafc131b-c28e-440d-97ac-3b147505078e ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.672-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-287--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2021)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.713-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: efdb5b57-1578-4c8d-b415-7a69d7650d6a: test5_fsmdb0.fsmcoll0 ( aad04aec-10f6-4c2e-aadf-f1052ef9cc6a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.200-0500 I INDEX [conn145] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: b5258dce-fb89-4436-a191-b8586ea2e6c0). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.749-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.713-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.983-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.994-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.673-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-293--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2526)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.736-0500 I STORAGE [ReplWriterWorker-1] createCollection: config.cache.chunks.test5_fsmdb0.fsmcoll0 with provided UUID: d4610810-07b8-4865-9f1e-e437f23b4c75 and options: { uuid: UUID("d4610810-07b8-4865-9f1e-e437f23b4c75") }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.201-0500 I COMMAND [conn145] CMD: validate local.startup_log, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.752-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.715-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 6132a19e-7f82-464f-b93f-9610a710909b: test5_fsmdb0.fsmcoll0 ( aad04aec-10f6-4c2e-aadf-f1052ef9cc6a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:20.985-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 5b9643e3-981a-4493-a17e-6aa2d30183df: test5_fsmdb0.agg_out ( bafc131b-c28e-440d-97ac-3b147505078e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.995-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.675-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-294--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2526)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.753-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns config.cache.chunks.test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.202-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.755-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 395dc708-267f-42ff-9074-096cf0e93fdd: config.cache.chunks.test5_fsmdb0.fsmcoll0 ( a801c5e5-16a4-42f8-a221-89c1b6217d87 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.736-0500 I STORAGE [ReplWriterWorker-13] createCollection: config.cache.chunks.test5_fsmdb0.fsmcoll0 with provided UUID: d4610810-07b8-4865-9f1e-e437f23b4c75 and options: { uuid: UUID("d4610810-07b8-4865-9f1e-e437f23b4c75") }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.103-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 with provided UUID: e225d742-d440-49e9-983f-4f289379cc6d and options: { uuid: UUID("e225d742-d440-49e9-983f-4f289379cc6d"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.997-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.676-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-292--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2526)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.774-0500 I INDEX [ReplWriterWorker-6] index build: starting on config.cache.chunks.test5_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.204-0500 I INDEX [conn145] validating collection local.startup_log (UUID: a1488758-c116-4144-adba-02b8f3b8144d)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.755-0500 I INDEX [ShardServerCatalogCacheLoader-0] Index build completed: 395dc708-267f-42ff-9074-096cf0e93fdd
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.752-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns config.cache.chunks.test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.117-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:20.998-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 04cc8323-6a4b-47c0-b17b-1d4642fe20a3: test5_fsmdb0.agg_out ( bafc131b-c28e-440d-97ac-3b147505078e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.677-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-297--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 3029)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.774-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.204-0500 I INDEX [conn145] validating index consistency _id_ on collection local.startup_log
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.755-0500 I SHARDING [ShardServerCatalogCacheLoader-0] Marking collection config.cache.chunks.test5_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.774-0500 I INDEX [ReplWriterWorker-12] index build: starting on config.cache.chunks.test5_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.117-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b with provided UUID: e4ec382f-4f92-4cc8-aa2a-63aca9447890 and options: { uuid: UUID("e4ec382f-4f92-4cc8-aa2a-63aca9447890"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.118-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 with provided UUID: e225d742-d440-49e9-983f-4f289379cc6d and options: { uuid: UUID("e225d742-d440-49e9-983f-4f289379cc6d"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.677-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-298--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 3029)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.774-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: e65da898-864a-4de7-9b2c-8556959d5ba8: config.cache.chunks.test5_fsmdb0.fsmcoll0 (d4610810-07b8-4865-9f1e-e437f23b4c75 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.204-0500 I INDEX [conn145] Validation complete for collection local.startup_log (UUID: a1488758-c116-4144-adba-02b8f3b8144d). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.759-0500 I SHARDING [conn37] Created 4 chunk(s) for: test5_fsmdb0.fsmcoll0, producing collection version 1|3||5ddd7dc43bbfe7fa5630eb06
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.774-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.131-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.132-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.678-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-295--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 3029)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.774-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.205-0500 I COMMAND [conn145] CMD: validate local.system.replset, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.759-0500 I SHARDING [conn37] about to log metadata event into changelog: { _id: "nz_desktop:20001-2019-11-26T14:32:20.759-0500-5ddd7dc43bbfe7fa5630eb30", server: "nz_desktop:20001", shard: "shard-rs0", clientAddr: "127.0.0.1:38444", time: new Date(1574796740759), what: "shardCollection.end", ns: "test5_fsmdb0.fsmcoll0", details: { version: "1|3||5ddd7dc43bbfe7fa5630eb06" } }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.774-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 672a0e35-0d5b-4fd6-a887-3435eff651d5: config.cache.chunks.test5_fsmdb0.fsmcoll0 (d4610810-07b8-4865-9f1e-e437f23b4c75 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.132-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 with provided UUID: ca540cbc-f78b-457b-85d5-dc4bf7272510 and options: { uuid: UUID("ca540cbc-f78b-457b-85d5-dc4bf7272510"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.133-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b with provided UUID: e4ec382f-4f92-4cc8-aa2a-63aca9447890 and options: { uuid: UUID("e4ec382f-4f92-4cc8-aa2a-63aca9447890"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.679-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-301--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.775-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.205-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.761-0500 I COMMAND [conn37] command admin.$cmd appName: "MongoDB Shell" command: _shardsvrShardCollection { _shardsvrShardCollection: "test5_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("dd718102-e73d-4e8c-9c24-9aea49593289"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796740, 19), signature: { hash: BinData(0, 2F6A9CF6DC62114B4DBA952DB92DB422C8EABEED), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46012", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 19), t: 1 } }, $db: "admin" } numYields:0 reslen:415 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 9 } }, ReplicationStateTransition: { acquireCount: { w: 15 } }, Global: { acquireCount: { r: 8, w: 7 } }, Database: { acquireCount: { r: 8, w: 7, W: 1 } }, Collection: { acquireCount: { r: 8, w: 3, W: 4 } }, Mutex: { acquireCount: { r: 16, W: 4 } } } flowControl:{ acquireCount: 5 } storage:{} protocol:op_msg 147ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.774-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.147-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.149-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.680-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-302--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.776-0500 I SHARDING [ReplWriterWorker-14] Marking collection config.cache.chunks.test5_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.207-0500 I INDEX [conn145] validating collection local.system.replset (UUID: ea98bf03-b956-4e01-b9a4-857e601cceda)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.921-0500 I STORAGE [conn37] createCollection: test5_fsmdb0.agg_out with generated UUID: bafc131b-c28e-440d-97ac-3b147505078e and options: {}
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.775-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.147-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 with provided UUID: 70d61678-f53b-4e07-b162-586616bbfc51 and options: { uuid: UUID("70d61678-f53b-4e07-b162-586616bbfc51"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.149-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 with provided UUID: ca540cbc-f78b-457b-85d5-dc4bf7272510 and options: { uuid: UUID("ca540cbc-f78b-457b-85d5-dc4bf7272510"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.681-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-299--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.778-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.207-0500 I INDEX [conn145] validating index consistency _id_ on collection local.system.replset
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.932-0500 I INDEX [conn37] index build: done building index _id_ on ns test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.776-0500 I SHARDING [ReplWriterWorker-1] Marking collection config.cache.chunks.test5_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.161-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.165-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.683-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-305--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 510)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.778-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index lastmod_1 on ns config.cache.chunks.test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.207-0500 I INDEX [conn145] Validation complete for collection local.system.replset (UUID: ea98bf03-b956-4e01-b9a4-857e601cceda). No corruption found.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.952-0500 I INDEX [conn65] Registering index build: 9479624b-d115-4a19-91a0-95ed24095007
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.778-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: drain applied 4 side writes (inserted: 4, deleted: 0) for 'lastmod_1' in 0 ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.162-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 with provided UUID: 9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc and options: { uuid: UUID("9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.166-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 with provided UUID: 70d61678-f53b-4e07-b162-586616bbfc51 and options: { uuid: UUID("70d61678-f53b-4e07-b162-586616bbfc51"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.684-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-306--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 510)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:20.781-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: e65da898-864a-4de7-9b2c-8556959d5ba8: config.cache.chunks.test5_fsmdb0.fsmcoll0 ( d4610810-07b8-4865-9f1e-e437f23b4c75 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.208-0500 I COMMAND [conn145] CMD: validate local.system.rollback.id, full:true
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.965-0500 I INDEX [conn65] index build: starting on test5_fsmdb0.agg_out properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.778-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index lastmod_1 on ns config.cache.chunks.test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.176-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.180-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.685-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-303--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 510)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.209-0500 I INDEX [conn145] validating the internal structure of index _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:21.977-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52902 #101 (14 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.965-0500 I INDEX [conn65] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:20.780-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 672a0e35-0d5b-4fd6-a887-3435eff651d5: config.cache.chunks.test5_fsmdb0.fsmcoll0 ( d4610810-07b8-4865-9f1e-e437f23b4c75 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.195-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.180-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 with provided UUID: 9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc and options: { uuid: UUID("9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.685-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-310--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 1850)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.211-0500 I INDEX [conn145] validating collection local.system.rollback.id (UUID: 0ad52f2a-9d3e-4f9f-b91b-17a9c570ab7e)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:21.977-0500 I NETWORK [conn101] received client metadata from 127.0.0.1:52902 conn101: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.965-0500 I STORAGE [conn65] Index build initialized: 9479624b-d115-4a19-91a0-95ed24095007: test5_fsmdb0.agg_out (bafc131b-c28e-440d-97ac-3b147505078e ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:21.977-0500 I NETWORK [listener] connection accepted from 127.0.0.1:36264 #101 (14 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.195-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.197-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.686-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-312--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 1850)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.211-0500 I INDEX [conn145] validating index consistency _id_ on collection local.system.rollback.id
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:22.045-0500 I NETWORK [listener] connection accepted from 127.0.0.1:52920 #102 (15 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.965-0500 I INDEX [conn65] Waiting for index build to complete: 9479624b-d115-4a19-91a0-95ed24095007
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:21.977-0500 I NETWORK [conn101] received client metadata from 127.0.0.1:36264 conn101: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.195-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: c1c320a5-1979-4a3c-83a8-fc966bdbdcc6: test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 (e225d742-d440-49e9-983f-4f289379cc6d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.214-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.687-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-308--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 1850)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.211-0500 I INDEX [conn145] Validation complete for collection local.system.rollback.id (UUID: 0ad52f2a-9d3e-4f9f-b91b-17a9c570ab7e). No corruption found.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:22.045-0500 I NETWORK [conn102] received client metadata from 127.0.0.1:52920 conn102: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.965-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:22.046-0500 I NETWORK [listener] connection accepted from 127.0.0.1:36282 #102 (15 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:22.046-0500 I NETWORK [conn102] received client metadata from 127.0.0.1:36282 conn102: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.214-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.688-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-311--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2021)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.213-0500 I NETWORK [conn145] end connection 127.0.0.1:57338 (22 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:22.056-0500 W CONTROL [conn102] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 603 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.966-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.195-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:22.057-0500 W CONTROL [conn102] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 492 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.214-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: f2055154-b51e-4ea2-9460-351cbd7f22fa: test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 (e225d742-d440-49e9-983f-4f289379cc6d ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.689-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-316--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2021)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.336-0500 I NETWORK [conn144] end connection 127.0.0.1:57302 (21 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:22.072-0500 W CONTROL [conn102] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 603 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.967-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.196-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:22.073-0500 W CONTROL [conn102] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 492 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.214-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.691-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-309--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2021)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.346-0500 I NETWORK [conn143] end connection 127.0.0.1:57300 (20 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:22.075-0500 I NETWORK [conn102] end connection 127.0.0.1:52920 (14 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.968-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 9479624b-d115-4a19-91a0-95ed24095007: test5_fsmdb0.agg_out ( bafc131b-c28e-440d-97ac-3b147505078e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:20.969-0500 I INDEX [conn65] Index build completed: 9479624b-d115-4a19-91a0-95ed24095007
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.072-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.072-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40222 #216 (46 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.215-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.967-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0' acquired for 'dropDatabase', ts : 5ddd7dc35cde74b6784bba91
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.198-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:22.075-0500 I NETWORK [conn102] end connection 127.0.0.1:36282 (14 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.072-0500 I NETWORK [conn216] received client metadata from 127.0.0.1:40222 conn216: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.692-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-315--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2022)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.218-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.968-0500 I SHARDING [conn17] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:32:19.968-0500-5ddd7dc35cde74b6784bba94", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55576", time: new Date(1574796739968), what: "dropDatabase.start", ns: "test4_fsmdb0", details: {} }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.206-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c1c320a5-1979-4a3c-83a8-fc966bdbdcc6: test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 ( e225d742-d440-49e9-983f-4f289379cc6d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.073-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 with generated UUID: e225d742-d440-49e9-983f-4f289379cc6d and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.692-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-320--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2022)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.228-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: f2055154-b51e-4ea2-9460-351cbd7f22fa: test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 ( e225d742-d440-49e9-983f-4f289379cc6d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.971-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0.agg_out' acquired for 'dropCollection', ts : 5ddd7dc35cde74b6784bba97
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.213-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.073-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b with generated UUID: e4ec382f-4f92-4cc8-aa2a-63aca9447890 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.693-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-313--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2022)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.235-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.971-0500 I SHARDING [conn17] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:32:19.971-0500-5ddd7dc35cde74b6784bba99", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55576", time: new Date(1574796739971), what: "dropCollection.start", ns: "test4_fsmdb0.agg_out", details: {} }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.213-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.075-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 with generated UUID: ca540cbc-f78b-457b-85d5-dc4bf7272510 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.694-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-319--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 373)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.236-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.981-0500 I SHARDING [conn17] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:32:19.981-0500-5ddd7dc35cde74b6784bbaa1", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55576", time: new Date(1574796739981), what: "dropCollection", ns: "test4_fsmdb0.agg_out", details: {} }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.213-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 8aa52441-79db-4b55-ad20-17a3bf6c6b24: test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b (e4ec382f-4f92-4cc8-aa2a-63aca9447890 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.075-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 with generated UUID: 70d61678-f53b-4e07-b162-586616bbfc51 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.695-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-324--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 373)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.236-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: c46e3be0-b9d9-4d70-af47-7afee07a4fc7: test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b (e4ec382f-4f92-4cc8-aa2a-63aca9447890 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.983-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dc35cde74b6784bba97' unlocked.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.213-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.076-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 with generated UUID: 9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.696-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-317--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 373)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.236-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.984-0500 I SHARDING [conn17] distributed lock 'test4_fsmdb0.fsmcoll0' acquired for 'dropCollection', ts : 5ddd7dc35cde74b6784bbaa4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.214-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.101-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.697-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-329--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796) with drop timestamp Timestamp(1574796726, 1203)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.236-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.984-0500 I SHARDING [conn17] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:32:19.984-0500-5ddd7dc35cde74b6784bbaa6", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55576", time: new Date(1574796739984), what: "dropCollection.start", ns: "test4_fsmdb0.fsmcoll0", details: {} }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.216-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.101-0500 I INDEX [conn110] Registering index build: d924d271-9469-4782-9f3a-820008ed730b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.698-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-330--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796) with drop timestamp Timestamp(1574796726, 1203)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.239-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:19.999-0500 I SHARDING [conn17] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:32:19.999-0500-5ddd7dc35cde74b6784bbaaf", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55576", time: new Date(1574796739999), what: "dropCollection", ns: "test4_fsmdb0.fsmcoll0", details: {} }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.222-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 8aa52441-79db-4b55-ad20-17a3bf6c6b24: test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b ( e4ec382f-4f92-4cc8-aa2a-63aca9447890 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.106-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.699-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-327--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796) with drop timestamp Timestamp(1574796726, 1203)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.245-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c46e3be0-b9d9-4d70-af47-7afee07a4fc7: test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b ( e4ec382f-4f92-4cc8-aa2a-63aca9447890 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.002-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dc35cde74b6784bbaa4' unlocked.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.241-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.107-0500 I INDEX [conn114] Registering index build: 2f7e13c5-6650-459e-afe0-5945a2fb15b2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.700-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-337--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b) with drop timestamp Timestamp(1574796726, 2083)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.262-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.016-0500 I SHARDING [conn17] about to log metadata event into changelog: { _id: "nz_desktop:20000-2019-11-26T14:32:20.016-0500-5ddd7dc45cde74b6784bbab7", server: "nz_desktop:20000", shard: "config", clientAddr: "127.0.0.1:55576", time: new Date(1574796740016), what: "dropDatabase", ns: "test4_fsmdb0", details: {} }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.241-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.113-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.701-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-338--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b) with drop timestamp Timestamp(1574796726, 2083)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.262-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.018-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dc35cde74b6784bba91' unlocked.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.241-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 053ba25c-f322-4962-82e0-ed479b2a6711: test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 (ca540cbc-f78b-457b-85d5-dc4bf7272510 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.113-0500 I INDEX [conn112] Registering index build: c7d843b8-e111-408f-9b3e-de14e38352e5
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.702-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-334--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b) with drop timestamp Timestamp(1574796726, 2083)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.262-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 4fd0637e-068b-40eb-9a8f-3c3ede826327: test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 (ca540cbc-f78b-457b-85d5-dc4bf7272510 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.544-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57366 #146 (21 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.241-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.119-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.703-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-336--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05) with drop timestamp Timestamp(1574796726, 2150)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.262-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.544-0500 I NETWORK [conn146] received client metadata from 127.0.0.1:57366 conn146: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.242-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.119-0500 I INDEX [conn108] Registering index build: aee64474-be3a-4d7e-bd58-0d0aad855427
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.704-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-340--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05) with drop timestamp Timestamp(1574796726, 2150)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.263-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.545-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57368 #147 (22 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.245-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.126-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.705-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-333--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05) with drop timestamp Timestamp(1574796726, 2150)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.265-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.545-0500 I NETWORK [conn147] received client metadata from 127.0.0.1:57368 conn147: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.252-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 053ba25c-f322-4962-82e0-ed479b2a6711: test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 ( ca540cbc-f78b-457b-85d5-dc4bf7272510 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.126-0500 I INDEX [conn46] Registering index build: 2228e2fe-d849-42ea-b796-e1c996dae0ed
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.706-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-335--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235) with drop timestamp Timestamp(1574796726, 3334)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.274-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4fd0637e-068b-40eb-9a8f-3c3ede826327: test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 ( ca540cbc-f78b-457b-85d5-dc4bf7272510 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.547-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57370 #148 (23 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.260-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.141-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.707-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-344--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235) with drop timestamp Timestamp(1574796726, 3334)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.283-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.547-0500 I NETWORK [conn148] received client metadata from 127.0.0.1:57370 conn148: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.260-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.141-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.708-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-332--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235) with drop timestamp Timestamp(1574796726, 3334)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.283-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.548-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57372 #149 (24 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.260-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: c540dd4c-b40b-4204-8c49-eaa95b170cbb: test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 (70d61678-f53b-4e07-b162-586616bbfc51 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.141-0500 I STORAGE [conn110] Index build initialized: d924d271-9469-4782-9f3a-820008ed730b: test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 (e225d742-d440-49e9-983f-4f289379cc6d ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.709-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-323--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3335)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.283-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 51b285a7-cba2-46f0-9bfe-7d1c16edb606: test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 (70d61678-f53b-4e07-b162-586616bbfc51 ): indexes: 1
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.548-0500 I NETWORK [conn149] received client metadata from 127.0.0.1:57372 conn149: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.260-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.141-0500 I INDEX [conn110] Waiting for index build to complete: d924d271-9469-4782-9f3a-820008ed730b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.710-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-326--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3335)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.283-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.565-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57408 #150 (25 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.261-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.154-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.711-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-321--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3335)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.284-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.565-0500 I NETWORK [conn150] received client metadata from 127.0.0.1:57408 conn150: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.264-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.154-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.712-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-349--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3336)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.287-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.573-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57418 #151 (26 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.267-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c540dd4c-b40b-4204-8c49-eaa95b170cbb: test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 ( 70d61678-f53b-4e07-b162-586616bbfc51 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.154-0500 I STORAGE [conn114] Index build initialized: 2f7e13c5-6650-459e-afe0-5945a2fb15b2: test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b (e4ec382f-4f92-4cc8-aa2a-63aca9447890 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.712-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-350--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3336)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.291-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 51b285a7-cba2-46f0-9bfe-7d1c16edb606: test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 ( 70d61678-f53b-4e07-b162-586616bbfc51 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.573-0500 I NETWORK [conn151] received client metadata from 127.0.0.1:57418 conn151: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:25.503-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 2 connections to that host remain open
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.283-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.154-0500 I INDEX [conn114] Waiting for index build to complete: 2f7e13c5-6650-459e-afe0-5945a2fb15b2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.714-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-347--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3336)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.306-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.575-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57420 #152 (27 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.283-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.154-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.715-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-354--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b) with drop timestamp Timestamp(1574796726, 4350)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.306-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.306-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: bc554fd5-15ef-486f-9771-118b651af1ac: test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.283-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 00e21b13-c326-49c3-aa7e-562804330fbf: test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.154-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.716-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-356--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b) with drop timestamp Timestamp(1574796726, 4350)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.575-0500 I NETWORK [conn152] received client metadata from 127.0.0.1:57420 conn152: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.306-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.284-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.155-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.717-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-352--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b) with drop timestamp Timestamp(1574796726, 4350)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.596-0500 I SHARDING [conn17] distributed lock 'test5_fsmdb0' acquired for 'dropCollection', ts : 5ddd7dc45cde74b6784bbad0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.307-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.284-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.155-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.718-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-355--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a) with drop timestamp Timestamp(1574796726, 4354)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.597-0500 I SHARDING [conn17] distributed lock 'test5_fsmdb0.fsmcoll0' acquired for 'dropCollection', ts : 5ddd7dc45cde74b6784bbad2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.308-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b (e4ec382f-4f92-4cc8-aa2a-63aca9447890) to test5_fsmdb0.agg_out and drop bafc131b-c28e-440d-97ac-3b147505078e.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.285-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b (e4ec382f-4f92-4cc8-aa2a-63aca9447890) to test5_fsmdb0.agg_out and drop bafc131b-c28e-440d-97ac-3b147505078e.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.164-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.719-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-358--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a) with drop timestamp Timestamp(1574796726, 4354)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.599-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dc45cde74b6784bbad2' unlocked.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.309-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.287-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.166-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.719-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-353--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a) with drop timestamp Timestamp(1574796726, 4354)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.600-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dc45cde74b6784bbad0' unlocked.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.309-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (bafc131b-c28e-440d-97ac-3b147505078e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 1021), t: 1 } and commit timestamp Timestamp(1574796741, 1021)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.287-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (bafc131b-c28e-440d-97ac-3b147505078e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 1021), t: 1 } and commit timestamp Timestamp(1574796741, 1021)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.173-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.720-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-343--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5360)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.602-0500 I SHARDING [conn17] distributed lock 'test5_fsmdb0' acquired for 'enableSharding', ts : 5ddd7dc45cde74b6784bbada
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.309-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (bafc131b-c28e-440d-97ac-3b147505078e).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.287-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (bafc131b-c28e-440d-97ac-3b147505078e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.173-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.722-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-346--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5360)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.605-0500 I SHARDING [conn17] Registering new database { _id: "test5_fsmdb0", primary: "shard-rs0", partitioned: false, version: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 } } in sharding catalog
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.309-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection e4ec382f-4f92-4cc8-aa2a-63aca9447890 from test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.287-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection e4ec382f-4f92-4cc8-aa2a-63aca9447890 from test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.173-0500 I STORAGE [conn112] Index build initialized: c7d843b8-e111-408f-9b3e-de14e38352e5: test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 (ca540cbc-f78b-457b-85d5-dc4bf7272510 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.723-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-341--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5360)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.607-0500 I SHARDING [conn17] Enabling sharding for database [test5_fsmdb0] in config db
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.309-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bafc131b-c28e-440d-97ac-3b147505078e)'. Ident: 'index-370--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 1021)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.287-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bafc131b-c28e-440d-97ac-3b147505078e)'. Ident: 'index-370--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 1021)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.173-0500 I INDEX [conn112] Waiting for index build to complete: c7d843b8-e111-408f-9b3e-de14e38352e5
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.724-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-363--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5361)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.609-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dc45cde74b6784bbada' unlocked.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.309-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bafc131b-c28e-440d-97ac-3b147505078e)'. Ident: 'index-371--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 1021)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.287-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bafc131b-c28e-440d-97ac-3b147505078e)'. Ident: 'index-371--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 1021)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.174-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d924d271-9469-4782-9f3a-820008ed730b: test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 ( e225d742-d440-49e9-983f-4f289379cc6d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.725-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-366--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5361)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.612-0500 I SHARDING [conn17] distributed lock 'test5_fsmdb0' acquired for 'shardCollection', ts : 5ddd7dc45cde74b6784bbae3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.309-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-369--4104909142373009110, commit timestamp: Timestamp(1574796741, 1021)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.287-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-369--8000595249233899911, commit timestamp: Timestamp(1574796741, 1021)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.177-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 2f7e13c5-6650-459e-afe0-5945a2fb15b2: test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b ( e4ec382f-4f92-4cc8-aa2a-63aca9447890 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.725-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-360--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5361)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.613-0500 I SHARDING [conn17] distributed lock 'test5_fsmdb0.fsmcoll0' acquired for 'shardCollection', ts : 5ddd7dc45cde74b6784bbae5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.310-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 (e225d742-d440-49e9-983f-4f289379cc6d) to test5_fsmdb0.agg_out and drop e4ec382f-4f92-4cc8-aa2a-63aca9447890.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.288-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 (e225d742-d440-49e9-983f-4f289379cc6d) to test5_fsmdb0.agg_out and drop e4ec382f-4f92-4cc8-aa2a-63aca9447890.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.192-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.726-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-364--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5869)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.685-0500 I NETWORK [conn147] end connection 127.0.0.1:57368 (26 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.310-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (e4ec382f-4f92-4cc8-aa2a-63aca9447890) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 1022), t: 1 } and commit timestamp Timestamp(1574796741, 1022)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.288-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (e4ec382f-4f92-4cc8-aa2a-63aca9447890) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 1022), t: 1 } and commit timestamp Timestamp(1574796741, 1022)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.192-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.727-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-368--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5869)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.692-0500 I NETWORK [conn146] end connection 127.0.0.1:57366 (25 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.310-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (e4ec382f-4f92-4cc8-aa2a-63aca9447890).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.288-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (e4ec382f-4f92-4cc8-aa2a-63aca9447890).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.192-0500 I STORAGE [conn108] Index build initialized: aee64474-be3a-4d7e-bd58-0d0aad855427: test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 (70d61678-f53b-4e07-b162-586616bbfc51 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.728-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-361--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5869)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.713-0500 D4 TXN [conn31] New transaction started with txnNumber: 0 on session with lsid 5a42fc82-8fed-4d56-af9c-31ac83dbd2bc
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.310-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection e225d742-d440-49e9-983f-4f289379cc6d from test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.288-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection e225d742-d440-49e9-983f-4f289379cc6d from test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.192-0500 I INDEX [conn108] Waiting for index build to complete: aee64474-be3a-4d7e-bd58-0d0aad855427
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.730-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-365--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.761-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test5_fsmdb0 from version {} to version { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.310-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e4ec382f-4f92-4cc8-aa2a-63aca9447890)'. Ident: 'index-376--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 1022)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.288-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e4ec382f-4f92-4cc8-aa2a-63aca9447890)'. Ident: 'index-376--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 1022)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.192-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.730-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-370--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.762-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test5_fsmdb0.fsmcoll0 to version 1|3||5ddd7dc43bbfe7fa5630eb06 took 0 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.310-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e4ec382f-4f92-4cc8-aa2a-63aca9447890)'. Ident: 'index-385--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 1022)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.288-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e4ec382f-4f92-4cc8-aa2a-63aca9447890)'. Ident: 'index-385--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 1022)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.192-0500 I INDEX [conn110] Index build completed: d924d271-9469-4782-9f3a-820008ed730b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.731-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-362--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.763-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dc45cde74b6784bbae5' unlocked.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.310-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-375--4104909142373009110, commit timestamp: Timestamp(1574796741, 1022)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.288-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-375--8000595249233899911, commit timestamp: Timestamp(1574796741, 1022)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.192-0500 I INDEX [conn114] Index build completed: 2f7e13c5-6650-459e-afe0-5945a2fb15b2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.732-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-374--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 509)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.765-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dc45cde74b6784bbae3' unlocked.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.311-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 with provided UUID: 2615395f-33b1-4b4a-907f-869755b6e215 and options: { uuid: UUID("2615395f-33b1-4b4a-907f-869755b6e215"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.289-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 with provided UUID: 2615395f-33b1-4b4a-907f-869755b6e215 and options: { uuid: UUID("2615395f-33b1-4b4a-907f-869755b6e215"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.192-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.733-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-376--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 509)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.765-0500 I COMMAND [conn17] command admin.$cmd appName: "MongoDB Shell" command: _configsvrShardCollection { _configsvrShardCollection: "test5_fsmdb0.fsmcoll0", key: { _id: "hashed" }, unique: false, numInitialChunks: 0, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("dd718102-e73d-4e8c-9c24-9aea49593289"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1574796740, 17), signature: { hash: BinData(0, 2F6A9CF6DC62114B4DBA952DB92DB422C8EABEED), keyId: 6763700092420489256 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46012", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 17), t: 1 } }, $db: "admin" } numYields:0 reslen:587 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 6 } }, Global: { acquireCount: { r: 2, w: 4 } }, Database: { acquireCount: { r: 2, w: 4 } }, Collection: { acquireCount: { r: 2, w: 4 } }, Mutex: { acquireCount: { r: 10, W: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 154ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.312-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: bc554fd5-15ef-486f-9771-118b651af1ac: test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 ( 9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.289-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 00e21b13-c326-49c3-aa7e-562804330fbf: test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 ( 9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.193-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.734-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-372--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 509)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.769-0500 I SHARDING [conn17] distributed lock 'test5_fsmdb0' acquired for 'enableSharding', ts : 5ddd7dc45cde74b6784bbb04
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.328-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.305-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.193-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.734-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-375--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1077)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.769-0500 I SHARDING [conn17] Enabling sharding for database [test5_fsmdb0] in config db
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.329-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 with provided UUID: 50313562-2926-49b7-94f9-3777d5535866 and options: { uuid: UUID("50313562-2926-49b7-94f9-3777d5535866"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.307-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 with provided UUID: 50313562-2926-49b7-94f9-3777d5535866 and options: { uuid: UUID("50313562-2926-49b7-94f9-3777d5535866"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.204-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.735-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-378--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1077)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.771-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dc45cde74b6784bbb04' unlocked.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.342-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.322-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.206-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.737-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-373--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1077)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.774-0500 I SHARDING [conn17] distributed lock 'test5_fsmdb0' acquired for 'shardCollection', ts : 5ddd7dc45cde74b6784bbb0a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.358-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 (70d61678-f53b-4e07-b162-586616bbfc51) to test5_fsmdb0.agg_out and drop e225d742-d440-49e9-983f-4f289379cc6d.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.330-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 (70d61678-f53b-4e07-b162-586616bbfc51) to test5_fsmdb0.agg_out and drop e225d742-d440-49e9-983f-4f289379cc6d.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.215-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.738-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-382--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1518)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.775-0500 I SHARDING [conn17] distributed lock 'test5_fsmdb0.fsmcoll0' acquired for 'shardCollection', ts : 5ddd7dc45cde74b6784bbb0c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.358-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (e225d742-d440-49e9-983f-4f289379cc6d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 2525), t: 1 } and commit timestamp Timestamp(1574796741, 2525)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.330-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (e225d742-d440-49e9-983f-4f289379cc6d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 2525), t: 1 } and commit timestamp Timestamp(1574796741, 2525)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.215-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.739-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-384--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1518)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.777-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test5_fsmdb0 from version {} to version { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.358-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (e225d742-d440-49e9-983f-4f289379cc6d).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.330-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (e225d742-d440-49e9-983f-4f289379cc6d).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.215-0500 I STORAGE [conn46] Index build initialized: 2228e2fe-d849-42ea-b796-e1c996dae0ed: test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.740-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-379--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1518)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.777-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test5_fsmdb0.fsmcoll0 to version 1|3||5ddd7dc43bbfe7fa5630eb06 took 0 ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.358-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 70d61678-f53b-4e07-b162-586616bbfc51 from test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.330-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 70d61678-f53b-4e07-b162-586616bbfc51 from test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.215-0500 I INDEX [conn46] Waiting for index build to complete: 2228e2fe-d849-42ea-b796-e1c996dae0ed
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.741-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-383--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2023)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.779-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dc45cde74b6784bbb0c' unlocked.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.358-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e225d742-d440-49e9-983f-4f289379cc6d)'. Ident: 'index-374--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 2525)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.330-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e225d742-d440-49e9-983f-4f289379cc6d)'. Ident: 'index-374--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 2525)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.215-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.741-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-386--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2023)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.781-0500 I SHARDING [conn17] distributed lock with ts: 5ddd7dc45cde74b6784bbb0a' unlocked.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.358-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e225d742-d440-49e9-983f-4f289379cc6d)'. Ident: 'index-383--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 2525)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.330-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e225d742-d440-49e9-983f-4f289379cc6d)'. Ident: 'index-383--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 2525)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.218-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: c7d843b8-e111-408f-9b3e-de14e38352e5: test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 ( ca540cbc-f78b-457b-85d5-dc4bf7272510 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.742-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-380--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2023)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.920-0500 I SHARDING [conn31] distributed lock 'test5_fsmdb0' acquired for 'createCollection', ts : 5ddd7dc45cde74b6784bbb1b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.359-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-373--4104909142373009110, commit timestamp: Timestamp(1574796741, 2525)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.330-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-373--8000595249233899911, commit timestamp: Timestamp(1574796741, 2525)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.218-0500 I INDEX [conn112] Index build completed: c7d843b8-e111-408f-9b3e-de14e38352e5
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.743-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-389--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2532)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.921-0500 I SHARDING [conn31] distributed lock 'test5_fsmdb0.agg_out' acquired for 'createCollection', ts : 5ddd7dc45cde74b6784bbb1d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.359-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 (ca540cbc-f78b-457b-85d5-dc4bf7272510) to test5_fsmdb0.agg_out and drop 70d61678-f53b-4e07-b162-586616bbfc51.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.331-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 (ca540cbc-f78b-457b-85d5-dc4bf7272510) to test5_fsmdb0.agg_out and drop 70d61678-f53b-4e07-b162-586616bbfc51.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.218-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 5), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 104ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.745-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-390--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2532)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.948-0500 I SHARDING [conn31] distributed lock with ts: 5ddd7dc45cde74b6784bbb1d' unlocked.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.359-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (70d61678-f53b-4e07-b162-586616bbfc51) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 2526), t: 1 } and commit timestamp Timestamp(1574796741, 2526)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.331-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (70d61678-f53b-4e07-b162-586616bbfc51) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 2526), t: 1 } and commit timestamp Timestamp(1574796741, 2526)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.221-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: aee64474-be3a-4d7e-bd58-0d0aad855427: test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 ( 70d61678-f53b-4e07-b162-586616bbfc51 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.746-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-388--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2532)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:20.950-0500 I SHARDING [conn31] distributed lock with ts: 5ddd7dc45cde74b6784bbb1b' unlocked.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.359-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (70d61678-f53b-4e07-b162-586616bbfc51).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.331-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (70d61678-f53b-4e07-b162-586616bbfc51).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.221-0500 I INDEX [conn108] Index build completed: aee64474-be3a-4d7e-bd58-0d0aad855427
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.747-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-394--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3537)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:21.971-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57486 #153 (26 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.359-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection ca540cbc-f78b-457b-85d5-dc4bf7272510 from test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.331-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection ca540cbc-f78b-457b-85d5-dc4bf7272510 from test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.221-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 5), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 102ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.748-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-396--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3537)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:21.971-0500 I NETWORK [conn153] received client metadata from 127.0.0.1:57486 conn153: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.359-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (70d61678-f53b-4e07-b162-586616bbfc51)'. Ident: 'index-380--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 2526)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.331-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (70d61678-f53b-4e07-b162-586616bbfc51)'. Ident: 'index-380--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 2526)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.222-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.748-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-391--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3537)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:21.971-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57488 #154 (27 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.359-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (70d61678-f53b-4e07-b162-586616bbfc51)'. Ident: 'index-389--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 2526)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.331-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (70d61678-f53b-4e07-b162-586616bbfc51)'. Ident: 'index-389--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 2526)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.225-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.749-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-395--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3540)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:21.972-0500 I NETWORK [conn154] received client metadata from 127.0.0.1:57488 conn154: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.359-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-379--4104909142373009110, commit timestamp: Timestamp(1574796741, 2526)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.331-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-379--8000595249233899911, commit timestamp: Timestamp(1574796741, 2526)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.225-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.750-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-400--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3540)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:25.503-0500 I NETWORK [conn20] end connection 127.0.0.1:55582 (26 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.360-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc) to test5_fsmdb0.agg_out and drop ca540cbc-f78b-457b-85d5-dc4bf7272510.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.332-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc) to test5_fsmdb0.agg_out and drop ca540cbc-f78b-457b-85d5-dc4bf7272510.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.225-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (bafc131b-c28e-440d-97ac-3b147505078e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 1021), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.751-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-392--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3540)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.360-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (ca540cbc-f78b-457b-85d5-dc4bf7272510) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 2527), t: 1 } and commit timestamp Timestamp(1574796741, 2527)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.332-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (ca540cbc-f78b-457b-85d5-dc4bf7272510) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 2527), t: 1 } and commit timestamp Timestamp(1574796741, 2527)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.225-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (bafc131b-c28e-440d-97ac-3b147505078e).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.753-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-399--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 4046)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.360-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (ca540cbc-f78b-457b-85d5-dc4bf7272510).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.332-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (ca540cbc-f78b-457b-85d5-dc4bf7272510).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.225-0500 I STORAGE [conn110] renameCollection: renaming collection e4ec382f-4f92-4cc8-aa2a-63aca9447890 from test5_fsmdb0.tmp.agg_out.9f0c1e69-29ba-4eb1-a5c9-52d1f2ae874b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.754-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-404--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 4046)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.360-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc from test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.332-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc from test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.225-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bafc131b-c28e-440d-97ac-3b147505078e)'. Ident: 'index-358-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 1021)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.754-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-397--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 4046)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.360-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ca540cbc-f78b-457b-85d5-dc4bf7272510)'. Ident: 'index-378--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 2527)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.332-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ca540cbc-f78b-457b-85d5-dc4bf7272510)'. Ident: 'index-378--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 2527)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.225-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bafc131b-c28e-440d-97ac-3b147505078e)'. Ident: 'index-359-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 1021)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.755-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-403--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.360-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ca540cbc-f78b-457b-85d5-dc4bf7272510)'. Ident: 'index-387--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 2527)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.332-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ca540cbc-f78b-457b-85d5-dc4bf7272510)'. Ident: 'index-387--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 2527)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.225-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-357-8224331490264904478, commit timestamp: Timestamp(1574796741, 1021)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.756-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-406--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.360-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-377--4104909142373009110, commit timestamp: Timestamp(1574796741, 2527)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.332-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-377--8000595249233899911, commit timestamp: Timestamp(1574796741, 2527)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.225-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.757-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-401--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.361-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 with provided UUID: f6a113a6-8575-4740-88fd-f168cda34531 and options: { uuid: UUID("f6a113a6-8575-4740-88fd-f168cda34531"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.333-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 with provided UUID: f6a113a6-8575-4740-88fd-f168cda34531 and options: { uuid: UUID("f6a113a6-8575-4740-88fd-f168cda34531"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.225-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (e4ec382f-4f92-4cc8-aa2a-63aca9447890) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 1022), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.758-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-409--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 510)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.376-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.347-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.225-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (e4ec382f-4f92-4cc8-aa2a-63aca9447890).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.759-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-410--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 510)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.377-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e with provided UUID: 3dc16a5f-4783-4567-b5b2-8333419cb2e6 and options: { uuid: UUID("3dc16a5f-4783-4567-b5b2-8333419cb2e6"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.348-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e with provided UUID: 3dc16a5f-4783-4567-b5b2-8333419cb2e6 and options: { uuid: UUID("3dc16a5f-4783-4567-b5b2-8333419cb2e6"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.225-0500 I STORAGE [conn114] renameCollection: renaming collection e225d742-d440-49e9-983f-4f289379cc6d from test5_fsmdb0.tmp.agg_out.282b5526-59f0-4a07-82f1-4e34d6830fd3 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.760-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-407--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 510)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.392-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.362-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.225-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1445206794142797213, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 574649458584676378, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796741071), clusterTime: Timestamp(1574796740, 570) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796740, 570), signature: { hash: BinData(0, 2F6A9CF6DC62114B4DBA952DB92DB422C8EABEED), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, 
Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 153ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.761-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-413--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1081)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.392-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a with provided UUID: 9e21fffc-edbd-4983-97fd-f506e4fc1c85 and options: { uuid: UUID("9e21fffc-edbd-4983-97fd-f506e4fc1c85"), temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.363-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a with provided UUID: 9e21fffc-edbd-4983-97fd-f506e4fc1c85 and options: { uuid: UUID("9e21fffc-edbd-4983-97fd-f506e4fc1c85"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.225-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e4ec382f-4f92-4cc8-aa2a-63aca9447890)'. Ident: 'index-367-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 1022)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.762-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-414--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1081)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.408-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.377-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.225-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e4ec382f-4f92-4cc8-aa2a-63aca9447890)'. Ident: 'index-373-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 1022)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.763-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-411--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1081)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.425-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.393-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.225-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-362-8224331490264904478, commit timestamp: Timestamp(1574796741, 1022)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.764-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-418--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1650)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.425-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.393-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.226-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3664996396988759853, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8283646325760901699, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796741071), clusterTime: Timestamp(1574796740, 570) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796740, 570), signature: { hash: BinData(0, 2F6A9CF6DC62114B4DBA952DB92DB422C8EABEED), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, 
Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 154ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.765-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-420--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1650)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.425-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 31216f0c-5a2c-4143-b2fc-0dfc7b983906: test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 (2615395f-33b1-4b4a-907f-869755b6e215 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.393-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 2d414fab-fb4a-45b5-b747-e5618c614821: test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 (2615395f-33b1-4b4a-907f-869755b6e215 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.226-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 2228e2fe-d849-42ea-b796-e1c996dae0ed: test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 ( 9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.765-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-415--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1650)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.425-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.393-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.226-0500 I INDEX [conn46] Index build completed: 2228e2fe-d849-42ea-b796-e1c996dae0ed
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.766-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-419--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2091)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.426-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.393-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.227-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 5), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 100ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.768-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-422--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2091)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.428-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.396-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.229-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 with generated UUID: 2615395f-33b1-4b4a-907f-869755b6e215 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.769-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-416--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2091)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.435-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 31216f0c-5a2c-4143-b2fc-0dfc7b983906: test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 ( 2615395f-33b1-4b4a-907f-869755b6e215 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.404-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 2d414fab-fb4a-45b5-b747-e5618c614821: test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 ( 2615395f-33b1-4b4a-907f-869755b6e215 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.229-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 with generated UUID: 50313562-2926-49b7-94f9-3777d5535866 and options: { temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.770-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-426--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2596)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.441-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.409-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.256-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.771-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-428--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2596)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.441-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.410-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.264-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.772-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-423--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2596)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.441-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: fba310fa-e57e-49b0-a23e-9d767c8d36f5: test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 (50313562-2926-49b7-94f9-3777d5535866 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.410-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 2bbb3d52-149d-4cdf-b924-c6b09b216c03: test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 (50313562-2926-49b7-94f9-3777d5535866 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.772-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-427--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 3164)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.442-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.410-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (e225d742-d440-49e9-983f-4f289379cc6d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 2525), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.773-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-430--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 3164)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.442-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.411-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (e225d742-d440-49e9-983f-4f289379cc6d).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.774-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-424--2588534479858262356 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 3164)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.444-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.413-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I STORAGE [conn108] renameCollection: renaming collection 70d61678-f53b-4e07-b162-586616bbfc51 from test5_fsmdb0.tmp.agg_out.07a41a89-ae73-44a6-9871-49da082879b7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.776-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-437--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948) with drop timestamp Timestamp(1574796731, 4546)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.450-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: fba310fa-e57e-49b0-a23e-9d767c8d36f5: test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 ( 50313562-2926-49b7-94f9-3777d5535866 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.422-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 2bbb3d52-149d-4cdf-b924-c6b09b216c03: test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 ( 50313562-2926-49b7-94f9-3777d5535866 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e225d742-d440-49e9-983f-4f289379cc6d)'. Ident: 'index-366-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 2525)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.777-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-438--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948) with drop timestamp Timestamp(1574796731, 4546)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.456-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.429-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e225d742-d440-49e9-983f-4f289379cc6d)'. Ident: 'index-371-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 2525)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.778-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-435--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948) with drop timestamp Timestamp(1574796731, 4546)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.456-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.429-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-361-8224331490264904478, commit timestamp: Timestamp(1574796741, 2525)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.779-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-445--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94) with drop timestamp Timestamp(1574796731, 4547)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.456-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 3d35ffee-e6b8-4846-b7f1-e222d7c7a17d: test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 (f6a113a6-8575-4740-88fd-f168cda34531 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.429-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: f3673ed7-a9b5-4fce-a620-3109db030d7e: test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 (f6a113a6-8575-4740-88fd-f168cda34531 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.779-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-446--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94) with drop timestamp Timestamp(1574796731, 4547)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.457-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.430-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (70d61678-f53b-4e07-b162-586616bbfc51) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 2526), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.780-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-443--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94) with drop timestamp Timestamp(1574796731, 4547)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.457-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.430-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (70d61678-f53b-4e07-b162-586616bbfc51).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.781-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-441--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7) with drop timestamp Timestamp(1574796732, 1)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.459-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.433-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4599650064875069107, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3597353500424106326, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796741071), clusterTime: Timestamp(1574796740, 570) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796740, 570), signature: { hash: BinData(0, 2F6A9CF6DC62114B4DBA952DB92DB422C8EABEED), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, 
Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 193ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.782-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-442--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7) with drop timestamp Timestamp(1574796732, 1)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.462-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 3d35ffee-e6b8-4846-b7f1-e222d7c7a17d: test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 ( f6a113a6-8575-4740-88fd-f168cda34531 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.437-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: f3673ed7-a9b5-4fce-a620-3109db030d7e: test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 ( f6a113a6-8575-4740-88fd-f168cda34531 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I STORAGE [conn112] renameCollection: renaming collection ca540cbc-f78b-457b-85d5-dc4bf7272510 from test5_fsmdb0.tmp.agg_out.88424a21-4308-4ad2-8059-11521d097be7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.784-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-439--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7) with drop timestamp Timestamp(1574796732, 1)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.853-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 (2615395f-33b1-4b4a-907f-869755b6e215) to test5_fsmdb0.agg_out and drop 9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (70d61678-f53b-4e07-b162-586616bbfc51)'. Ident: 'index-369-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 2526)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.852-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 (2615395f-33b1-4b4a-907f-869755b6e215) to test5_fsmdb0.agg_out and drop 9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.785-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-449--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810) with drop timestamp Timestamp(1574796732, 517)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.853-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 3041), t: 1 } and commit timestamp Timestamp(1574796741, 3041)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (70d61678-f53b-4e07-b162-586616bbfc51)'. Ident: 'index-377-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 2526)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.852-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 3041), t: 1 } and commit timestamp Timestamp(1574796741, 3041)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.785-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-452--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810) with drop timestamp Timestamp(1574796732, 517)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.853-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-364-8224331490264904478, commit timestamp: Timestamp(1574796741, 2526)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.852-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.786-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-448--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810) with drop timestamp Timestamp(1574796732, 517)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.853-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 2615395f-33b1-4b4a-907f-869755b6e215 from test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.852-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 2615395f-33b1-4b4a-907f-869755b6e215 from test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.787-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-459--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204) with drop timestamp Timestamp(1574796732, 2027)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.853-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc)'. Ident: 'index-382--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 3041)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (ca540cbc-f78b-457b-85d5-dc4bf7272510) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 2527), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.852-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc)'. Ident: 'index-382--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 3041)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.788-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-462--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204) with drop timestamp Timestamp(1574796732, 2027)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.853-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc)'. Ident: 'index-391--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 3041)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6466871470186773490, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8910638351809075862, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796741073), clusterTime: Timestamp(1574796740, 567) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 2), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 191ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.852-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc)'. Ident: 'index-391--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 3041)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.789-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-456--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204) with drop timestamp Timestamp(1574796732, 2027)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.853-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-381--4104909142373009110, commit timestamp: Timestamp(1574796741, 3041)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.265-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (ca540cbc-f78b-457b-85d5-dc4bf7272510).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.852-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-381--8000595249233899911, commit timestamp: Timestamp(1574796741, 3041)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.789-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-460--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0) with drop timestamp Timestamp(1574796732, 2029)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.896-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c with provided UUID: 8f2a656c-d231-4a1a-aa58-38198f7f7579 and options: { uuid: UUID("8f2a656c-d231-4a1a-aa58-38198f7f7579"), temp: true }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.266-0500 I STORAGE [conn46] renameCollection: renaming collection 9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc from test5_fsmdb0.tmp.agg_out.9c99a23b-cc4d-4263-90fc-dfc72a6cfed2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.880-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c with provided UUID: 8f2a656c-d231-4a1a-aa58-38198f7f7579 and options: { uuid: UUID("8f2a656c-d231-4a1a-aa58-38198f7f7579"), temp: true }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.791-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-464--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0) with drop timestamp Timestamp(1574796732, 2029)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.911-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.266-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ca540cbc-f78b-457b-85d5-dc4bf7272510)'. Ident: 'index-368-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 2527)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.895-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.792-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-457--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0) with drop timestamp Timestamp(1574796732, 2029)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.931-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.266-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ca540cbc-f78b-457b-85d5-dc4bf7272510)'. Ident: 'index-375-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 2527)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.915-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.793-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-461--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8) with drop timestamp Timestamp(1574796732, 2030)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.931-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.266-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-363-8224331490264904478, commit timestamp: Timestamp(1574796741, 2527)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.915-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.794-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-466--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8) with drop timestamp Timestamp(1574796732, 2030)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.931-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 57e8cd9b-dd39-4720-abfc-f8c0a7c5f1b9: test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e (3dc16a5f-4783-4567-b5b2-8333419cb2e6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.266-0500 I INDEX [conn114] Registering index build: 9d25d79d-0f83-47f2-a111-526b4d4b0063
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.915-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 029fb17d-df5a-42b8-a0fa-c4c86ed396bb: test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e (3dc16a5f-4783-4567-b5b2-8333419cb2e6 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.795-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-458--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8) with drop timestamp Timestamp(1574796732, 2030)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.931-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.266-0500 I INDEX [conn110] Registering index build: be227a1b-5ad8-4744-a38e-0b1886fc5fd0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.915-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.796-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-471--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e) with drop timestamp Timestamp(1574796732, 2809)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.931-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.266-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5556064178352349095, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4078406078375720862, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796741073), clusterTime: Timestamp(1574796740, 567) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 2), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 192ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.916-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.796-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-474--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e) with drop timestamp Timestamp(1574796732, 2809)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.934-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.269-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 with generated UUID: f6a113a6-8575-4740-88fd-f168cda34531 and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.918-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.797-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-469--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e) with drop timestamp Timestamp(1574796732, 2809)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.938-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 57e8cd9b-dd39-4720-abfc-f8c0a7c5f1b9: test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e ( 3dc16a5f-4783-4567-b5b2-8333419cb2e6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.269-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e with generated UUID: 3dc16a5f-4783-4567-b5b2-8333419cb2e6 and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.922-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 029fb17d-df5a-42b8-a0fa-c4c86ed396bb: test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e ( 3dc16a5f-4783-4567-b5b2-8333419cb2e6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.799-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-470--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298) with drop timestamp Timestamp(1574796732, 3056)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.975-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53264 #105 (15 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.269-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a with generated UUID: 9e21fffc-edbd-4983-97fd-f506e4fc1c85 and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.938-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.800-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-472--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298) with drop timestamp Timestamp(1574796732, 3056)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.975-0500 I NETWORK [conn105] received client metadata from 127.0.0.1:53264 conn105: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.303-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.938-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.801-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-468--2588534479858262356 (ns: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298) with drop timestamp Timestamp(1574796732, 3056)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.987-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.303-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.938-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: aea6e264-eb12-406b-88d5-acdd7bdbe452: test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a (9e21fffc-edbd-4983-97fd-f506e4fc1c85 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.987-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.972-0500 I COMMAND [conn55] CMD: drop test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.303-0500 I STORAGE [conn114] Index build initialized: 9d25d79d-0f83-47f2-a111-526b4d4b0063: test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 (2615395f-33b1-4b4a-907f-869755b6e215 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.938-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.987-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 03189780-db68-43cb-b5d6-77bfb3196411: test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a (9e21fffc-edbd-4983-97fd-f506e4fc1c85 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.972-0500 I STORAGE [conn55] dropCollection: test4_fsmdb0.agg_out (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.303-0500 I INDEX [conn114] Waiting for index build to complete: 9d25d79d-0f83-47f2-a111-526b4d4b0063
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.938-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.987-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.973-0500 I STORAGE [conn55] Finishing collection drop for test4_fsmdb0.agg_out (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.304-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.939-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 (f6a113a6-8575-4740-88fd-f168cda34531) to test5_fsmdb0.agg_out and drop 2615395f-33b1-4b4a-907f-869755b6e215.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.987-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.973-0500 I STORAGE [conn55] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.agg_out (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393)'. Ident: 'index-433--2588534479858262356', commit timestamp: 'Timestamp(1574796739, 5)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.311-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.941-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.988-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 (f6a113a6-8575-4740-88fd-f168cda34531) to test5_fsmdb0.agg_out and drop 2615395f-33b1-4b4a-907f-869755b6e215.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.973-0500 I STORAGE [conn55] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.agg_out (6d7b1b53-805f-4e82-a6e8-dfd96f7e7393)'. Ident: 'index-434--2588534479858262356', commit timestamp: 'Timestamp(1574796739, 5)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.311-0500 I INDEX [conn46] Registering index build: a3943689-e23a-461f-858f-4d0d6d07abe8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.941-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (2615395f-33b1-4b4a-907f-869755b6e215) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 3996), t: 1 } and commit timestamp Timestamp(1574796741, 3996)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.990-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.973-0500 I STORAGE [conn55] Deferring table drop for collection 'test4_fsmdb0.agg_out'. Ident: collection-431--2588534479858262356, commit timestamp: Timestamp(1574796739, 5)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.318-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.941-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (2615395f-33b1-4b4a-907f-869755b6e215).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.991-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (2615395f-33b1-4b4a-907f-869755b6e215) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 3996), t: 1 } and commit timestamp Timestamp(1574796741, 3996)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.980-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.agg_out took 0 ms and found the collection is not sharded
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.319-0500 I INDEX [conn112] Registering index build: 126f8ddb-54f6-4ce6-812e-ae6a4800e510
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.941-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection f6a113a6-8575-4740-88fd-f168cda34531 from test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.991-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (2615395f-33b1-4b4a-907f-869755b6e215).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.980-0500 I SHARDING [conn55] Updating metadata for collection test4_fsmdb0.agg_out from collection version: 1|0||5ddd7dbbcf8184c2e1494ea3, shard version: 1|0||5ddd7dbbcf8184c2e1494ea3 to collection version: due to UUID change
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.326-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.941-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (2615395f-33b1-4b4a-907f-869755b6e215)'. Ident: 'index-394--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 3996)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.991-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection f6a113a6-8575-4740-88fd-f168cda34531 from test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.980-0500 I COMMAND [ShardServerCatalogCacheLoader-2] CMD: drop config.cache.chunks.test4_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.326-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.941-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (2615395f-33b1-4b4a-907f-869755b6e215)'. Ident: 'index-403--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 3996)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.991-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (2615395f-33b1-4b4a-907f-869755b6e215)'. Ident: 'index-394--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 3996)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.980-0500 I STORAGE [ShardServerCatalogCacheLoader-2] dropCollection: config.cache.chunks.test4_fsmdb0.agg_out (b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.326-0500 I INDEX [conn108] Registering index build: f2d242b0-fb10-4a24-99a0-30040482ee7e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.941-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-393--8000595249233899911, commit timestamp: Timestamp(1574796741, 3996)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.991-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (2615395f-33b1-4b4a-907f-869755b6e215)'. Ident: 'index-403--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 3996)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.980-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Finishing collection drop for config.cache.chunks.test4_fsmdb0.agg_out (b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.334-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.943-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: aea6e264-eb12-406b-88d5-acdd7bdbe452: test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a ( 9e21fffc-edbd-4983-97fd-f506e4fc1c85 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.991-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-393--4104909142373009110, commit timestamp: Timestamp(1574796741, 3996)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.981-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test4_fsmdb0.agg_out (b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2)'. Ident: 'index-451--2588534479858262356', commit timestamp: 'Timestamp(1574796739, 9)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.341-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.944-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 (50313562-2926-49b7-94f9-3777d5535866) to test5_fsmdb0.agg_out and drop f6a113a6-8575-4740-88fd-f168cda34531.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.992-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 03189780-db68-43cb-b5d6-77bfb3196411: test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a ( 9e21fffc-edbd-4983-97fd-f506e4fc1c85 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.981-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test4_fsmdb0.agg_out (b7fc7959-7e1a-4c43-a1d7-088a9ffbc6d2)'. Ident: 'index-454--2588534479858262356', commit timestamp: 'Timestamp(1574796739, 9)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.341-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.944-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (f6a113a6-8575-4740-88fd-f168cda34531) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 4050), t: 1 } and commit timestamp Timestamp(1574796741, 4050)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.993-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 (50313562-2926-49b7-94f9-3777d5535866) to test5_fsmdb0.agg_out and drop f6a113a6-8575-4740-88fd-f168cda34531.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.981-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Deferring table drop for collection 'config.cache.chunks.test4_fsmdb0.agg_out'. Ident: collection-450--2588534479858262356, commit timestamp: Timestamp(1574796739, 9)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.341-0500 I STORAGE [conn110] Index build initialized: be227a1b-5ad8-4744-a38e-0b1886fc5fd0: test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 (50313562-2926-49b7-94f9-3777d5535866 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.944-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (f6a113a6-8575-4740-88fd-f168cda34531).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.994-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (f6a113a6-8575-4740-88fd-f168cda34531) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 4050), t: 1 } and commit timestamp Timestamp(1574796741, 4050)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.989-0500 I COMMAND [conn55] CMD: drop test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.341-0500 I INDEX [conn110] Waiting for index build to complete: be227a1b-5ad8-4744-a38e-0b1886fc5fd0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.944-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 50313562-2926-49b7-94f9-3777d5535866 from test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.994-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (f6a113a6-8575-4740-88fd-f168cda34531).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.989-0500 I STORAGE [conn55] dropCollection: test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.341-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.944-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f6a113a6-8575-4740-88fd-f168cda34531)'. Ident: 'index-398--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 4050)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.994-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 50313562-2926-49b7-94f9-3777d5535866 from test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.989-0500 I STORAGE [conn55] Finishing collection drop for test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.342-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 9d25d79d-0f83-47f2-a111-526b4d4b0063: test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 ( 2615395f-33b1-4b4a-907f-869755b6e215 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.944-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f6a113a6-8575-4740-88fd-f168cda34531)'. Ident: 'index-407--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 4050)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.994-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f6a113a6-8575-4740-88fd-f168cda34531)'. Ident: 'index-398--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 4050)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.989-0500 I STORAGE [conn55] Deferring table drop for index '_id_' on collection 'test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd)'. Ident: 'index-217--2588534479858262356', commit timestamp: 'Timestamp(1574796739, 14)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.343-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.944-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-397--8000595249233899911, commit timestamp: Timestamp(1574796741, 4050)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.994-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f6a113a6-8575-4740-88fd-f168cda34531)'. Ident: 'index-407--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 4050)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.989-0500 I STORAGE [conn55] Deferring table drop for index '_id_hashed' on collection 'test4_fsmdb0.fsmcoll0 (08555f78-3db2-4ee9-9e10-8c80139ec7dd)'. Ident: 'index-218--2588534479858262356', commit timestamp: 'Timestamp(1574796739, 14)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.352-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.949-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 with provided UUID: 6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513 and options: { uuid: UUID("6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.994-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-397--4104909142373009110, commit timestamp: Timestamp(1574796741, 4050)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.989-0500 I STORAGE [conn55] Deferring table drop for collection 'test4_fsmdb0.fsmcoll0'. Ident: collection-216--2588534479858262356, commit timestamp: Timestamp(1574796739, 14)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.359-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.975-0500 I NETWORK [listener] connection accepted from 127.0.0.1:54154 #105 (16 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:21.998-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 with provided UUID: 6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513 and options: { uuid: UUID("6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.999-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test4_fsmdb0.fsmcoll0 took 0 ms and found the collection is not sharded
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.359-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.975-0500 I NETWORK [conn105] received client metadata from 127.0.0.1:54154 conn105: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.013-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.999-0500 I SHARDING [conn55] Updating metadata for collection test4_fsmdb0.fsmcoll0 from collection version: 1|3||5ddd7daccf8184c2e1494359, shard version: 1|3||5ddd7daccf8184c2e1494359 to collection version: due to UUID change
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.359-0500 I STORAGE [conn46] Index build initialized: a3943689-e23a-461f-858f-4d0d6d07abe8: test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 (f6a113a6-8575-4740-88fd-f168cda34531 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:21.986-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.033-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.999-0500 I COMMAND [ShardServerCatalogCacheLoader-2] CMD: drop config.cache.chunks.test4_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.360-0500 I INDEX [conn46] Waiting for index build to complete: a3943689-e23a-461f-858f-4d0d6d07abe8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.003-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.033-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.999-0500 I STORAGE [ShardServerCatalogCacheLoader-2] dropCollection: config.cache.chunks.test4_fsmdb0.fsmcoll0 (c7f3cab2-be92-4a48-8ca9-60ce74a83411) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.360-0500 I INDEX [conn114] Index build completed: 9d25d79d-0f83-47f2-a111-526b4d4b0063
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.003-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.034-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: f6916f59-5f28-4393-bf5e-e4d22bd163e6: test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c (8f2a656c-d231-4a1a-aa58-38198f7f7579 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.999-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Finishing collection drop for config.cache.chunks.test4_fsmdb0.fsmcoll0 (c7f3cab2-be92-4a48-8ca9-60ce74a83411).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.360-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.003-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 627c6755-bbb5-4ae9-bff1-546a20476f50: test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c (8f2a656c-d231-4a1a-aa58-38198f7f7579 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.034-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.999-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0 (c7f3cab2-be92-4a48-8ca9-60ce74a83411)'. Ident: 'index-221--2588534479858262356', commit timestamp: 'Timestamp(1574796739, 23)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.360-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 2524), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 9088 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 103ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.004-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.034-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.999-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0 (c7f3cab2-be92-4a48-8ca9-60ce74a83411)'. Ident: 'index-222--2588534479858262356', commit timestamp: 'Timestamp(1574796739, 23)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.360-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: be227a1b-5ad8-4744-a38e-0b1886fc5fd0: test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 ( 50313562-2926-49b7-94f9-3777d5535866 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.006-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.037-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f with provided UUID: a8cc5830-449d-459a-a062-36665203d501 and options: { uuid: UUID("a8cc5830-449d-459a-a062-36665203d501"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:19.999-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Deferring table drop for collection 'config.cache.chunks.test4_fsmdb0.fsmcoll0'. Ident: collection-220--2588534479858262356, commit timestamp: Timestamp(1574796739, 23)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.361-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.006-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f with provided UUID: a8cc5830-449d-459a-a062-36665203d501 and options: { uuid: UUID("a8cc5830-449d-459a-a062-36665203d501"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.037-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.002-0500 I COMMAND [conn55] dropDatabase test4_fsmdb0 - starting
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.363-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.008-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.045-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: f6916f59-5f28-4393-bf5e-e4d22bd163e6: test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c ( 8f2a656c-d231-4a1a-aa58-38198f7f7579 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.002-0500 I COMMAND [conn55] dropDatabase test4_fsmdb0 - dropped 0 collection(s)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.372-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: a3943689-e23a-461f-858f-4d0d6d07abe8: test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 ( f6a113a6-8575-4740-88fd-f168cda34531 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.017-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 627c6755-bbb5-4ae9-bff1-546a20476f50: test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c ( 8f2a656c-d231-4a1a-aa58-38198f7f7579 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.047-0500 I NETWORK [listener] connection accepted from 127.0.0.1:53296 #106 (16 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.002-0500 I COMMAND [conn55] dropDatabase test4_fsmdb0 - finished
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.382-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.025-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.047-0500 I NETWORK [conn106] received client metadata from 127.0.0.1:53296 conn106: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.016-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test4_fsmdb0 took 0 ms and failed :: caused by :: NamespaceNotFound: database test4_fsmdb0 not found
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.382-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.030-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.053-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.016-0500 I SHARDING [conn55] setting this node's cached database version for test4_fsmdb0 to {}
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.382-0500 I STORAGE [conn112] Index build initialized: 126f8ddb-54f6-4ce6-812e-ae6a4800e510: test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e (3dc16a5f-4783-4567-b5b2-8333419cb2e6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.031-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e (3dc16a5f-4783-4567-b5b2-8333419cb2e6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 5057), t: 1 } and commit timestamp Timestamp(1574796741, 5057)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.058-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.550-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47588 #209 (41 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.382-0500 I INDEX [conn112] Waiting for index build to complete: 126f8ddb-54f6-4ce6-812e-ae6a4800e510
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.031-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e (3dc16a5f-4783-4567-b5b2-8333419cb2e6).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.058-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e (3dc16a5f-4783-4567-b5b2-8333419cb2e6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 5057), t: 1 } and commit timestamp Timestamp(1574796741, 5057)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.550-0500 I NETWORK [conn209] received client metadata from 127.0.0.1:47588 conn209: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.382-0500 I INDEX [conn46] Index build completed: a3943689-e23a-461f-858f-4d0d6d07abe8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.031-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e (3dc16a5f-4783-4567-b5b2-8333419cb2e6)'. Ident: 'index-400--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 5057)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.058-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e (3dc16a5f-4783-4567-b5b2-8333419cb2e6).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.551-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47600 #210 (42 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.382-0500 I INDEX [conn110] Index build completed: be227a1b-5ad8-4744-a38e-0b1886fc5fd0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.031-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e (3dc16a5f-4783-4567-b5b2-8333419cb2e6)'. Ident: 'index-411--8000595249233899911', commit timestamp: 'Timestamp(1574796741, 5057)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.058-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e (3dc16a5f-4783-4567-b5b2-8333419cb2e6)'. Ident: 'index-400--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 5057)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.551-0500 I NETWORK [conn210] received client metadata from 127.0.0.1:47600 conn210: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.382-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.031-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e'. Ident: collection-399--8000595249233899911, commit timestamp: Timestamp(1574796741, 5057)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.058-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e (3dc16a5f-4783-4567-b5b2-8333419cb2e6)'. Ident: 'index-411--4104909142373009110', commit timestamp: 'Timestamp(1574796741, 5057)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.553-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47606 #211 (43 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.383-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 2525), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 817 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 117ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.047-0500 I NETWORK [listener] connection accepted from 127.0.0.1:54186 #106 (17 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.058-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e'. Ident: collection-399--4104909142373009110, commit timestamp: Timestamp(1574796741, 5057)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.553-0500 I NETWORK [conn211] received client metadata from 127.0.0.1:47606 conn211: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.048-0500 I NETWORK [conn106] received client metadata from 127.0.0.1:54186 conn106: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.059-0500 W CONTROL [conn106] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 724 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.554-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47610 #212 (44 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.383-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 3041), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.052-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.077-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.554-0500 I NETWORK [conn212] received client metadata from 127.0.0.1:47610 conn212: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.848-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.052-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:25.658-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796741, 4115), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:821 protocol:op_msg 3761ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.077-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.569-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47618 #213 (45 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.848-0500 I STORAGE [conn114] renameCollection: renaming collection 2615395f-33b1-4b4a-907f-869755b6e215 from test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.052-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: aa598dd6-83b7-4317-a607-a8f643e88ab0: test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.077-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 0b563ee3-4094-4eb0-891d-0c5b87e19d1e: test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.569-0500 I NETWORK [conn213] received client metadata from 127.0.0.1:47618 conn213: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.848-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc)'. Ident: 'index-370-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 3041)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.052-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:25.659-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796741, 5121), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:821 protocol:op_msg 3676ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.077-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.582-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47638 #214 (46 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.848-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9a9df9e7-0bdb-44d7-be6b-165d1a43b9dc)'. Ident: 'index-379-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 3041)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.053-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.078-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.582-0500 I NETWORK [conn214] received client metadata from 127.0.0.1:47638 conn214: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.848-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-365-8224331490264904478, commit timestamp: Timestamp(1574796741, 3041)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.053-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c with provided UUID: 809f5cc3-11f0-44bf-a06b-4b5d9b09c34e and options: { uuid: UUID("809f5cc3-11f0-44bf-a06b-4b5d9b09c34e"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.079-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c with provided UUID: 809f5cc3-11f0-44bf-a06b-4b5d9b09c34e and options: { uuid: UUID("809f5cc3-11f0-44bf-a06b-4b5d9b09c34e"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.583-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47640 #215 (47 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.848-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.056-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.082-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.584-0500 I NETWORK [conn215] received client metadata from 127.0.0.1:47640 conn215: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.848-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883 appName: "tid:2" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.5961d121-2549-4e2a-b253-059dc7e7f883", to: "test5_fsmdb0.agg_out", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 3039), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 1776 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 467ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.059-0500 W CONTROL [conn106] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 327 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.089-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 0b563ee3-4094-4eb0-891d-0c5b87e19d1e: test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 ( 6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.585-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47646 #216 (48 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.849-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3390965430176238431, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3012690013921837117, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796741228), clusterTime: Timestamp(1574796741, 1022) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 1022), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 620ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.063-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: aa598dd6-83b7-4317-a607-a8f643e88ab0: test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 ( 6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.098-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.585-0500 I NETWORK [conn216] received client metadata from 127.0.0.1:47646 conn216: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.849-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.071-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.102-0500 I COMMAND [ReplWriterWorker-1] CMD: drop test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.616-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47654 #217 (49 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.850-0500 I COMMAND [conn46] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: getMore { getMore: 7560385850058468303, collection: "fsmcoll0", lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 3041), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } originatingCommand: { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } } ], fromMongos: true, needsMerge: true, collation: { locale: "simple" }, cursor: { batchSize: 0 }, runtimeConstants: { localNow: new Date(1574796741268), clusterTime: Timestamp(1574796741, 2527) }, use44SortKeys: true, allowImplicitCollectionCreation: false, shardVersion: [ Timestamp(1, 1), ObjectId('5ddd7dc43bbfe7fa5630eb06') ], lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 2527), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } planSummary: COLLSCAN cursorid:7560385850058468303 keysExamined:0 docsExamined:508 cursorExhausted:1 numYields:3 nreturned:250 reslen:255794 locks:{ ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 4 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 465444 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 467ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.075-0500 I COMMAND [ReplWriterWorker-2] CMD: drop test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.102-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a (9e21fffc-edbd-4983-97fd-f506e4fc1c85) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796742, 2), t: 1 } and commit timestamp Timestamp(1574796742, 2)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.616-0500 I NETWORK [conn217] received client metadata from 127.0.0.1:47654 conn217: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.852-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c with generated UUID: 8f2a656c-d231-4a1a-aa58-38198f7f7579 and options: { temp: true }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.076-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a (9e21fffc-edbd-4983-97fd-f506e4fc1c85) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796742, 2), t: 1 } and commit timestamp Timestamp(1574796742, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.102-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a (9e21fffc-edbd-4983-97fd-f506e4fc1c85).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.618-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47662 #218 (50 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.857-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.076-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a (9e21fffc-edbd-4983-97fd-f506e4fc1c85).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.102-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a (9e21fffc-edbd-4983-97fd-f506e4fc1c85)'. Ident: 'index-402--4104909142373009110', commit timestamp: 'Timestamp(1574796742, 2)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.618-0500 I NETWORK [conn218] received client metadata from 127.0.0.1:47662 conn218: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.869-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.076-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a (9e21fffc-edbd-4983-97fd-f506e4fc1c85)'. Ident: 'index-402--8000595249233899911', commit timestamp: 'Timestamp(1574796742, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.102-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a (9e21fffc-edbd-4983-97fd-f506e4fc1c85)'. Ident: 'index-413--4104909142373009110', commit timestamp: 'Timestamp(1574796742, 2)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.629-0500 W CONTROL [conn218] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 377 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.869-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.076-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a (9e21fffc-edbd-4983-97fd-f506e4fc1c85)'. Ident: 'index-413--8000595249233899911', commit timestamp: 'Timestamp(1574796742, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.102-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a'. Ident: collection-401--4104909142373009110, commit timestamp: Timestamp(1574796742, 2)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.633-0500 W CONTROL [conn218] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 377 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.869-0500 I STORAGE [conn108] Index build initialized: f2d242b0-fb10-4a24-99a0-30040482ee7e: test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a (9e21fffc-edbd-4983-97fd-f506e4fc1c85 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.076-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a'. Ident: collection-401--8000595249233899911, commit timestamp: Timestamp(1574796742, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.103-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 with provided UUID: e598322a-824f-46b8-8433-6606348c63f2 and options: { uuid: UUID("e598322a-824f-46b8-8433-6606348c63f2"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.635-0500 I NETWORK [conn217] end connection 127.0.0.1:47654 (49 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.869-0500 I INDEX [conn108] Waiting for index build to complete: f2d242b0-fb10-4a24-99a0-30040482ee7e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.076-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 with provided UUID: e598322a-824f-46b8-8433-6606348c63f2 and options: { uuid: UUID("e598322a-824f-46b8-8433-6606348c63f2"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.117-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.636-0500 I NETWORK [conn218] end connection 127.0.0.1:47662 (48 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.869-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.089-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.136-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.664-0500 I STORAGE [conn125] createCollection: test5_fsmdb0.fsmcoll0 with provided UUID: aad04aec-10f6-4c2e-aadf-f1052ef9cc6a and options: { uuid: UUID("aad04aec-10f6-4c2e-aadf-f1052ef9cc6a") }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.870-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 126f8ddb-54f6-4ce6-812e-ae6a4800e510: test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e ( 3dc16a5f-4783-4567-b5b2-8333419cb2e6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.106-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.136-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.675-0500 I INDEX [conn125] index build: done building index _id_ on ns test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.870-0500 I INDEX [conn112] Index build completed: 126f8ddb-54f6-4ce6-812e-ae6a4800e510
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.106-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.136-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 528fac18-9af3-4801-ab79-d157b4370f33: test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f (a8cc5830-449d-459a-a062-36665203d501 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.685-0500 I INDEX [conn125] index build: done building index _id_hashed on ns test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.870-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 2531), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 551ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.106-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: b0e90f1a-e59d-4838-8ba1-c3716350173d: test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f (a8cc5830-449d-459a-a062-36665203d501 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.136-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.685-0500 I NETWORK [conn210] end connection 127.0.0.1:47600 (47 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.876-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.106-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.137-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.685-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test5_fsmdb0 from version {} to version { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 } took 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.876-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.107-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.139-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.686-0500 I SHARDING [conn125] Marking collection test5_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.878-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.109-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.140-0500 I COMMAND [ReplWriterWorker-5] CMD: drop test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.692-0500 I NETWORK [conn209] end connection 127.0.0.1:47588 (46 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.878-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.110-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.140-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c (8f2a656c-d231-4a1a-aa58-38198f7f7579) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796742, 507), t: 1 } and commit timestamp Timestamp(1574796742, 507)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.718-0500 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test5_fsmdb0.fsmcoll0 to version 1|3||5ddd7dc43bbfe7fa5630eb06 took 1 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.878-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (2615395f-33b1-4b4a-907f-869755b6e215) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 3996), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.110-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c (8f2a656c-d231-4a1a-aa58-38198f7f7579) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796742, 507), t: 1 } and commit timestamp Timestamp(1574796742, 507)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.140-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c (8f2a656c-d231-4a1a-aa58-38198f7f7579).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.718-0500 I SHARDING [conn59] Updating metadata for collection test5_fsmdb0.fsmcoll0 from collection version: to collection version: 1|3||5ddd7dc43bbfe7fa5630eb06, shard version: 1|3||5ddd7dc43bbfe7fa5630eb06 due to version change
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.878-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (2615395f-33b1-4b4a-907f-869755b6e215).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.110-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c (8f2a656c-d231-4a1a-aa58-38198f7f7579).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.140-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c (8f2a656c-d231-4a1a-aa58-38198f7f7579)'. Ident: 'index-410--4104909142373009110', commit timestamp: 'Timestamp(1574796742, 507)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.718-0500 I STORAGE [ShardServerCatalogCacheLoader-2] createCollection: config.cache.chunks.test5_fsmdb0.fsmcoll0 with provided UUID: d4610810-07b8-4865-9f1e-e437f23b4c75 and options: { uuid: UUID("d4610810-07b8-4865-9f1e-e437f23b4c75") }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.878-0500 I STORAGE [conn46] renameCollection: renaming collection f6a113a6-8575-4740-88fd-f168cda34531 from test5_fsmdb0.tmp.agg_out.4154b7af-79b7-4329-b453-a871284d0d27 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.110-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c (8f2a656c-d231-4a1a-aa58-38198f7f7579)'. Ident: 'index-410--8000595249233899911', commit timestamp: 'Timestamp(1574796742, 507)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.140-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c (8f2a656c-d231-4a1a-aa58-38198f7f7579)'. Ident: 'index-417--4104909142373009110', commit timestamp: 'Timestamp(1574796742, 507)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.734-0500 I INDEX [ShardServerCatalogCacheLoader-2] index build: done building index _id_ on ns config.cache.chunks.test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.878-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (2615395f-33b1-4b4a-907f-869755b6e215)'. Ident: 'index-383-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 3996)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.111-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c (8f2a656c-d231-4a1a-aa58-38198f7f7579)'. Ident: 'index-417--8000595249233899911', commit timestamp: 'Timestamp(1574796742, 507)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.140-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c'. Ident: collection-409--4104909142373009110, commit timestamp: Timestamp(1574796742, 507)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.734-0500 I INDEX [ShardServerCatalogCacheLoader-2] Registering index build: 95e4f5db-4f26-4876-97ed-e0cebb8d40ca
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.878-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (2615395f-33b1-4b4a-907f-869755b6e215)'. Ident: 'index-385-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 3996)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.111-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c'. Ident: collection-409--8000595249233899911, commit timestamp: Timestamp(1574796742, 507)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.141-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513) to test5_fsmdb0.agg_out and drop 50313562-2926-49b7-94f9-3777d5535866.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.749-0500 I INDEX [ShardServerCatalogCacheLoader-2] index build: starting on config.cache.chunks.test5_fsmdb0.fsmcoll0 properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.878-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-381-8224331490264904478, commit timestamp: Timestamp(1574796741, 3996)
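The "Deferring table drop" lines above show the storage layer's two-phase drop: at drop time the collection and index idents are only queued with the drop's commit timestamp, and the underlying tables are reaped later, once that timestamp is majority-committed. A minimal sketch of that bookkeeping (illustrative names only, not the server's actual classes):

```python
# Hypothetical sketch of the "drop-pending" bookkeeping described by the
# "Deferring table drop" log lines: idents are queued with the drop's commit
# timestamp and only reaped once the majority commit point passes it.

class DropPendingReaper:
    def __init__(self):
        self.pending = []  # list of (commit_ts, ident) tuples

    def defer_drop(self, ident, commit_ts):
        # Called at drop time; the table itself is NOT removed yet.
        self.pending.append((commit_ts, ident))

    def reap(self, majority_commit_ts):
        # Drop every ident whose commit timestamp is now majority-committed.
        dropped = [i for ts, i in self.pending if ts <= majority_commit_ts]
        self.pending = [(ts, i) for ts, i in self.pending if ts > majority_commit_ts]
        return dropped

reaper = DropPendingReaper()
reaper.defer_drop("index-410--4104909142373009110", (1574796742, 507))
reaper.defer_drop("collection-409--4104909142373009110", (1574796742, 507))
assert reaper.reap((1574796742, 506)) == []   # majority point not there yet
assert len(reaper.reap((1574796742, 507))) == 2  # now both idents are reaped
```

This is why a rolled-back or replicated drop is safe: nothing is physically deleted until the drop is durable across a majority of the replica set.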
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.111-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513) to test5_fsmdb0.agg_out and drop 50313562-2926-49b7-94f9-3777d5535866.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.141-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (50313562-2926-49b7-94f9-3777d5535866) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796742, 508), t: 1 } and commit timestamp Timestamp(1574796742, 508)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.749-0500 I INDEX [ShardServerCatalogCacheLoader-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.878-0500 I INDEX [conn110] Registering index build: 6004c8ad-c044-4e86-9438-8a7c7a491af3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.111-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (50313562-2926-49b7-94f9-3777d5535866) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796742, 508), t: 1 } and commit timestamp Timestamp(1574796742, 508)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.141-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (50313562-2926-49b7-94f9-3777d5535866).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.749-0500 I STORAGE [ShardServerCatalogCacheLoader-2] Index build initialized: 95e4f5db-4f26-4876-97ed-e0cebb8d40ca: config.cache.chunks.test5_fsmdb0.fsmcoll0 (d4610810-07b8-4865-9f1e-e437f23b4c75 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.879-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7560385850058468303, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6051221315063906413, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796741268), clusterTime: Timestamp(1574796741, 2527) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 2527), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 610ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.111-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (50313562-2926-49b7-94f9-3777d5535866).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.141-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513 from test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.749-0500 I INDEX [ShardServerCatalogCacheLoader-2] Waiting for index build to complete: 95e4f5db-4f26-4876-97ed-e0cebb8d40ca
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.880-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: f2d242b0-fb10-4a24-99a0-30040482ee7e: test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a ( 9e21fffc-edbd-4983-97fd-f506e4fc1c85 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.111-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513 from test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.141-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (50313562-2926-49b7-94f9-3777d5535866)'. Ident: 'index-396--4104909142373009110', commit timestamp: 'Timestamp(1574796742, 508)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.749-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.892-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.112-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (50313562-2926-49b7-94f9-3777d5535866)'. Ident: 'index-396--8000595249233899911', commit timestamp: 'Timestamp(1574796742, 508)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.141-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (50313562-2926-49b7-94f9-3777d5535866)'. Ident: 'index-405--4104909142373009110', commit timestamp: 'Timestamp(1574796742, 508)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.750-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.892-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.112-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (50313562-2926-49b7-94f9-3777d5535866)'. Ident: 'index-405--8000595249233899911', commit timestamp: 'Timestamp(1574796742, 508)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.141-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-395--4104909142373009110, commit timestamp: Timestamp(1574796742, 508)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.754-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index lastmod_1 on ns config.cache.chunks.test5_fsmdb0.fsmcoll0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.892-0500 I STORAGE [conn110] Index build initialized: 6004c8ad-c044-4e86-9438-8a7c7a491af3: test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c (8f2a656c-d231-4a1a-aa58-38198f7f7579 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.112-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-395--8000595249233899911, commit timestamp: Timestamp(1574796742, 508)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.144-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 528fac18-9af3-4801-ab79-d157b4370f33: test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f ( a8cc5830-449d-459a-a062-36665203d501 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.756-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 95e4f5db-4f26-4876-97ed-e0cebb8d40ca: config.cache.chunks.test5_fsmdb0.fsmcoll0 ( d4610810-07b8-4865-9f1e-e437f23b4c75 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.892-0500 I INDEX [conn110] Waiting for index build to complete: 6004c8ad-c044-4e86-9438-8a7c7a491af3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.113-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: b0e90f1a-e59d-4838-8ba1-c3716350173d: test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f ( a8cc5830-449d-459a-a062-36665203d501 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.160-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.756-0500 I INDEX [ShardServerCatalogCacheLoader-2] Index build completed: 95e4f5db-4f26-4876-97ed-e0cebb8d40ca
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.892-0500 I INDEX [conn108] Index build completed: f2d242b0-fb10-4a24-99a0-30040482ee7e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.129-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.160-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:20.757-0500 I SHARDING [ShardServerCatalogCacheLoader-2] Marking collection config.cache.chunks.test5_fsmdb0.fsmcoll0 as collection version:
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.892-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.129-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.160-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: d0808ee6-fbc2-4853-81f4-3ec9334e74ff: test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c (809f5cc3-11f0-44bf-a06b-4b5d9b09c34e ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:21.977-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47708 #219 (47 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.892-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 2531), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 465811 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 566ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.129-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 7c03ba4d-7867-469f-b0aa-52c47b9a4e71: test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c (809f5cc3-11f0-44bf-a06b-4b5d9b09c34e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.160-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:21.977-0500 I NETWORK [conn219] received client metadata from 127.0.0.1:47708 conn219: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.893-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (f6a113a6-8575-4740-88fd-f168cda34531) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796741, 4050), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.129-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:25.730-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796742, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:823 protocol:op_msg 3718ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:25.731-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796742, 511), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:821 protocol:op_msg 3668ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.161-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:21.978-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47710 #220 (48 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.893-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (f6a113a6-8575-4740-88fd-f168cda34531).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.130-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:25.731-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796742, 508), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:821 protocol:op_msg 3672ms
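The errCode:125 (CommandFailed) aggregations above fail in `internalRenameIfOptionsAndIndexesMatch`: `$out` writes into a temp collection and only commits the rename onto `agg_out` if the target's options are unchanged since the pipeline started. Here a concurrent collMod flipped `validationLevel: "strict"` to `"moderate"`, so the comparison fails and the rename is aborted. A minimal sketch of that guard, using a hypothetical helper rather than the server's actual code:

```python
# Hypothetical sketch of the options-match guard behind errCode:125:
# the tmp -> target rename only commits if the target collection's options
# captured at $out start still equal the options observed at commit time.

def options_match(original, current):
    # The real check also compares index specs; options alone suffice here.
    return original == current

# Options captured when the $out pipeline began (from the log line above).
original = {"validationLevel": "strict", "validationAction": "warn"}
# Options at commit time, after a concurrent collMod raced the pipeline.
changed = {"validationLevel": "moderate", "validationAction": "warn"}

assert options_match(original, dict(original))  # no race: rename commits
assert not options_match(original, changed)     # race: CommandFailed (125)
```

The later retries at 14:32:25 (conn204, conn200, conn58) succeed with `nreturned:0`, which is the expected pattern for this FSM workload: the error is transient and the workload simply reruns the aggregation.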
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:25.782-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796745, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 122ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.163-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:21.978-0500 I NETWORK [conn220] received client metadata from 127.0.0.1:47710 conn220: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.893-0500 I STORAGE [conn112] renameCollection: renaming collection 50313562-2926-49b7-94f9-3777d5535866 from test5_fsmdb0.tmp.agg_out.8fa49dba-d755-4c39-9b69-bd2c4e7b9037 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.133-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:25.882-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796745, 1516), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 149ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:25.812-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796745, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 148ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.164-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 with provided UUID: 3d9c7996-cf3a-4167-904c-93a488a83f20 and options: { uuid: UUID("3d9c7996-cf3a-4167-904c-93a488a83f20"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:22.042-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47716 #221 (49 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.893-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f6a113a6-8575-4740-88fd-f168cda34531)'. Ident: 'index-390-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 4050)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.134-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 with provided UUID: 3d9c7996-cf3a-4167-904c-93a488a83f20 and options: { uuid: UUID("3d9c7996-cf3a-4167-904c-93a488a83f20"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.165-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d0808ee6-fbc2-4853-81f4-3ec9334e74ff: test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c ( 809f5cc3-11f0-44bf-a06b-4b5d9b09c34e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:22.042-0500 I NETWORK [conn221] received client metadata from 127.0.0.1:47716 conn221: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.893-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f6a113a6-8575-4740-88fd-f168cda34531)'. Ident: 'index-395-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 4050)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.134-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 7c03ba4d-7867-469f-b0aa-52c47b9a4e71: test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c ( 809f5cc3-11f0-44bf-a06b-4b5d9b09c34e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.177-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:22.045-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47720 #222 (50 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.893-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-386-8224331490264904478, commit timestamp: Timestamp(1574796741, 4050)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.150-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.178-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b with provided UUID: 32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8 and options: { uuid: UUID("32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:22.045-0500 I NETWORK [conn222] received client metadata from 127.0.0.1:47720 conn222: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.893-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.151-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b with provided UUID: 32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8 and options: { uuid: UUID("32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.191-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:22.055-0500 W CONTROL [conn222] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 377 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:22.072-0500 W CONTROL [conn222] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 0, data: {}, timesEntered: 377 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.163-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.212-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.893-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3250232689151740139, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2459575951558114154, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796741228), clusterTime: Timestamp(1574796741, 1022) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 1022), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 664ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:22.075-0500 I NETWORK [conn221] end connection 127.0.0.1:47716 (49 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.183-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.212-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.894-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:22.075-0500 I NETWORK [conn222] end connection 127.0.0.1:47720 (48 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.183-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.212-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 41e27977-501d-435f-b60f-c24ccca39920: test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 (e598322a-824f-46b8-8433-6606348c63f2 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.895-0500 I COMMAND [conn67] CMD: dropIndexes test5_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.183-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 978ec891-2827-4981-8296-3c8698114c56: test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 (e598322a-824f-46b8-8433-6606348c63f2 ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:25.658-0500 I NETWORK [conn119] end connection 127.0.0.1:46450 (47 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.212-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.896-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 with generated UUID: 6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.183-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.212-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.896-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.183-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.215-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.898-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f with generated UUID: a8cc5830-449d-459a-a062-36665203d501 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.186-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.216-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.907-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 6004c8ad-c044-4e86-9438-8a7c7a491af3: test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c ( 8f2a656c-d231-4a1a-aa58-38198f7f7579 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.190-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 978ec891-2827-4981-8296-3c8698114c56: test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 ( e598322a-824f-46b8-8433-6606348c63f2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.216-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f (a8cc5830-449d-459a-a062-36665203d501) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796742, 1518), t: 1 } and commit timestamp Timestamp(1574796742, 1518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.907-0500 I INDEX [conn110] Index build completed: 6004c8ad-c044-4e86-9438-8a7c7a491af3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.191-0500 I COMMAND [ReplWriterWorker-10] CMD: drop test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.216-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f (a8cc5830-449d-459a-a062-36665203d501).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.923-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.191-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f (a8cc5830-449d-459a-a062-36665203d501) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796742, 1518), t: 1 } and commit timestamp Timestamp(1574796742, 1518)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.216-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f (a8cc5830-449d-459a-a062-36665203d501)'. Ident: 'index-420--4104909142373009110', commit timestamp: 'Timestamp(1574796742, 1518)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.930-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.191-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f (a8cc5830-449d-459a-a062-36665203d501).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.216-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f (a8cc5830-449d-459a-a062-36665203d501)'. Ident: 'index-427--4104909142373009110', commit timestamp: 'Timestamp(1574796742, 1518)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.930-0500 I INDEX [conn108] Registering index build: 57f3b0b4-b6e9-4114-a29c-37c167af6fd6
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.191-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f (a8cc5830-449d-459a-a062-36665203d501)'. Ident: 'index-420--8000595249233899911', commit timestamp: 'Timestamp(1574796742, 1518)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.216-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f'. Ident: collection-419--4104909142373009110, commit timestamp: Timestamp(1574796742, 1518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.930-0500 I INDEX [conn112] Registering index build: 63853552-7a1a-48cc-baf4-48a48e69621e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.191-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f (a8cc5830-449d-459a-a062-36665203d501)'. Ident: 'index-427--8000595249233899911', commit timestamp: 'Timestamp(1574796742, 1518)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:22.217-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 41e27977-501d-435f-b60f-c24ccca39920: test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 ( e598322a-824f-46b8-8433-6606348c63f2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.931-0500 I COMMAND [conn114] CMD: drop test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:22.191-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f'. Ident: collection-419--8000595249233899911, commit timestamp: Timestamp(1574796742, 1518)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.667-0500 I COMMAND [ReplWriterWorker-9] CMD: drop test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.975-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40234 #217 (47 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.665-0500 I COMMAND [ReplWriterWorker-0] CMD: drop test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.667-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c (809f5cc3-11f0-44bf-a06b-4b5d9b09c34e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 2), t: 1 } and commit timestamp Timestamp(1574796745, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.975-0500 I NETWORK [conn217] received client metadata from 127.0.0.1:40234 conn217: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.665-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c (809f5cc3-11f0-44bf-a06b-4b5d9b09c34e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 2), t: 1 } and commit timestamp Timestamp(1574796745, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.667-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c (809f5cc3-11f0-44bf-a06b-4b5d9b09c34e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.975-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40236 #218 (48 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.665-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c (809f5cc3-11f0-44bf-a06b-4b5d9b09c34e).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.667-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c (809f5cc3-11f0-44bf-a06b-4b5d9b09c34e)'. Ident: 'index-424--4104909142373009110', commit timestamp: 'Timestamp(1574796745, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.975-0500 I NETWORK [conn218] received client metadata from 127.0.0.1:40236 conn218: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.665-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c (809f5cc3-11f0-44bf-a06b-4b5d9b09c34e)'. Ident: 'index-424--8000595249233899911', commit timestamp: 'Timestamp(1574796745, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.667-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c (809f5cc3-11f0-44bf-a06b-4b5d9b09c34e)'. Ident: 'index-429--4104909142373009110', commit timestamp: 'Timestamp(1574796745, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.980-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.665-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c (809f5cc3-11f0-44bf-a06b-4b5d9b09c34e)'. Ident: 'index-429--8000595249233899911', commit timestamp: 'Timestamp(1574796745, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.667-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c'. Ident: collection-423--4104909142373009110, commit timestamp: Timestamp(1574796745, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.980-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.665-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c'. Ident: collection-423--8000595249233899911, commit timestamp: Timestamp(1574796745, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.704-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.980-0500 I STORAGE [conn108] Index build initialized: 57f3b0b4-b6e9-4114-a29c-37c167af6fd6: test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.687-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.704-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.980-0500 I INDEX [conn108] Waiting for index build to complete: 57f3b0b4-b6e9-4114-a29c-37c167af6fd6
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.687-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.704-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: b7359d7b-a353-42d3-bad5-b6d1ee7ec7dd: test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 (3d9c7996-cf3a-4167-904c-93a488a83f20 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.980-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e (3dc16a5f-4783-4567-b5b2-8333419cb2e6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.687-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: ad0e005b-9060-43c6-b3a0-9192e3dfa3b7: test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 (3d9c7996-cf3a-4167-904c-93a488a83f20 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.705-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.980-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e (3dc16a5f-4783-4567-b5b2-8333419cb2e6).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.687-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.705-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.980-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e (3dc16a5f-4783-4567-b5b2-8333419cb2e6)'. Ident: 'index-391-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 5057)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.688-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.707-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.980-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e (3dc16a5f-4783-4567-b5b2-8333419cb2e6)'. Ident: 'index-397-8224331490264904478', commit timestamp: 'Timestamp(1574796741, 5057)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.690-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.710-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: b7359d7b-a353-42d3-bad5-b6d1ee7ec7dd: test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 ( 3d9c7996-cf3a-4167-904c-93a488a83f20 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.980-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e'. Ident: collection-387-8224331490264904478, commit timestamp: Timestamp(1574796741, 5057)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.700-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: ad0e005b-9060-43c6-b3a0-9192e3dfa3b7: test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 ( 3d9c7996-cf3a-4167-904c-93a488a83f20 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.726-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.980-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.707-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.726-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.980-0500 I COMMAND [conn65] command test5_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3628179453001375208, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2442796773062099878, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796741267), clusterTime: Timestamp(1574796741, 2526) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 2527), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.8be3464c-354e-4079-ba47-14118d25ab8e\", to: \"test5_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:884 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 712ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.707-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.726-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 6ff69ee5-00d2-45e7-89d6-fcf76f9bd1aa: test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b (32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.981-0500 I COMMAND [conn110] CMD: drop test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.707-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 4b5171bd-253f-4a70-b547-9acd471c483d: test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b (32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.726-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.981-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.707-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.726-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.983-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.708-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.729-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.984-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c with generated UUID: 809f5cc3-11f0-44bf-a06b-4b5d9b09c34e and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.711-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.730-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 with provided UUID: 7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46 and options: { uuid: UUID("7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:21.994-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 57f3b0b4-b6e9-4114-a29c-37c167af6fd6: test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 ( 6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.712-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 with provided UUID: 7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46 and options: { uuid: UUID("7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.732-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 6ff69ee5-00d2-45e7-89d6-fcf76f9bd1aa: test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b ( 32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.009-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.713-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 4b5171bd-253f-4a70-b547-9acd471c483d: test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b ( 32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.747-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.009-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.728-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.748-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 with provided UUID: 9dfabf25-5277-441e-b92a-0df4e5a93c44 and options: { uuid: UUID("9dfabf25-5277-441e-b92a-0df4e5a93c44"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.009-0500 I STORAGE [conn112] Index build initialized: 63853552-7a1a-48cc-baf4-48a48e69621e: test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f (a8cc5830-449d-459a-a062-36665203d501 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.728-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 with provided UUID: 9dfabf25-5277-441e-b92a-0df4e5a93c44 and options: { uuid: UUID("9dfabf25-5277-441e-b92a-0df4e5a93c44"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.764-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.009-0500 I INDEX [conn112] Waiting for index build to complete: 63853552-7a1a-48cc-baf4-48a48e69621e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.743-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.800-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.009-0500 I INDEX [conn108] Index build completed: 57f3b0b4-b6e9-4114-a29c-37c167af6fd6
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.009-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a (9e21fffc-edbd-4983-97fd-f506e4fc1c85) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.800-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.771-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.009-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a (9e21fffc-edbd-4983-97fd-f506e4fc1c85).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.800-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: aa82e3c1-969d-4b32-913c-a14d3a3ec4aa: test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.771-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.010-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a (9e21fffc-edbd-4983-97fd-f506e4fc1c85)'. Ident: 'index-392-8224331490264904478', commit timestamp: 'Timestamp(1574796742, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.800-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.771-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 5392a642-e48b-4bce-8102-9632fc6ca3c2: test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.010-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a (9e21fffc-edbd-4983-97fd-f506e4fc1c85)'. Ident: 'index-399-8224331490264904478', commit timestamp: 'Timestamp(1574796742, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.801-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.771-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.010-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a'. Ident: collection-388-8224331490264904478, commit timestamp: Timestamp(1574796742, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.802-0500 I COMMAND [ReplWriterWorker-9] CMD: drop test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.772-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.010-0500 I COMMAND [conn71] command test5_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4662557735776056798, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5138932834447427600, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796741268), clusterTime: Timestamp(1574796741, 2527) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 2527), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.e2ef3257-005a-4036-ac21-5b4783033f9a\", to: \"test5_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:884 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 741ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.802-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 (e598322a-824f-46b8-8433-6606348c63f2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 1514), t: 1 } and commit timestamp Timestamp(1574796745, 1514)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.773-0500 I COMMAND [ReplWriterWorker-3] CMD: drop test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.017-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.802-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 (e598322a-824f-46b8-8433-6606348c63f2).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.773-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 (e598322a-824f-46b8-8433-6606348c63f2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 1514), t: 1 } and commit timestamp Timestamp(1574796745, 1514)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.018-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.802-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 (e598322a-824f-46b8-8433-6606348c63f2)'. Ident: 'index-426--4104909142373009110', commit timestamp: 'Timestamp(1574796745, 1514)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.773-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 (e598322a-824f-46b8-8433-6606348c63f2).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.018-0500 I INDEX [conn114] Registering index build: 87de212a-dd6d-4ea9-95ac-542a4dfa5e5e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.802-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 (e598322a-824f-46b8-8433-6606348c63f2)'. Ident: 'index-435--4104909142373009110', commit timestamp: 'Timestamp(1574796745, 1514)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.773-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 (e598322a-824f-46b8-8433-6606348c63f2)'. Ident: 'index-426--8000595249233899911', commit timestamp: 'Timestamp(1574796745, 1514)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.018-0500 I COMMAND [conn46] CMD: drop test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.802-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0'. Ident: collection-425--4104909142373009110, commit timestamp: Timestamp(1574796745, 1514)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.773-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 (e598322a-824f-46b8-8433-6606348c63f2)'. Ident: 'index-435--8000595249233899911', commit timestamp: 'Timestamp(1574796745, 1514)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.018-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.803-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.773-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0'. Ident: collection-425--8000595249233899911, commit timestamp: Timestamp(1574796745, 1514)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.019-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 with generated UUID: e598322a-824f-46b8-8433-6606348c63f2 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.803-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b (32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 1515), t: 1 } and commit timestamp Timestamp(1574796745, 1515)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.774-0500 I COMMAND [ReplWriterWorker-0] CMD: drop test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.029-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.803-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b (32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.774-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.043-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40252 #219 (49 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.803-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b (32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8)'. Ident: 'index-434--4104909142373009110', commit timestamp: 'Timestamp(1574796745, 1515)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.774-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b (32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 1515), t: 1 } and commit timestamp Timestamp(1574796745, 1515)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.044-0500 I NETWORK [conn219] received client metadata from 127.0.0.1:40252 conn219: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.803-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b (32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8)'. Ident: 'index-439--4104909142373009110', commit timestamp: 'Timestamp(1574796745, 1515)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.774-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b (32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.046-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.803-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b'. Ident: collection-433--4104909142373009110, commit timestamp: Timestamp(1574796745, 1515)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.774-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b (32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8)'. Ident: 'index-434--8000595249233899911', commit timestamp: 'Timestamp(1574796745, 1515)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.046-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.803-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.774-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b (32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8)'. Ident: 'index-439--8000595249233899911', commit timestamp: 'Timestamp(1574796745, 1515)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.046-0500 I STORAGE [conn114] Index build initialized: 87de212a-dd6d-4ea9-95ac-542a4dfa5e5e: test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c (809f5cc3-11f0-44bf-a06b-4b5d9b09c34e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.804-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.774-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b'. Ident: collection-433--8000595249233899911, commit timestamp: Timestamp(1574796745, 1515)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.046-0500 I INDEX [conn114] Waiting for index build to complete: 87de212a-dd6d-4ea9-95ac-542a4dfa5e5e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.804-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 (3d9c7996-cf3a-4167-904c-93a488a83f20) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 1516), t: 1 } and commit timestamp Timestamp(1574796745, 1516)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.775-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.046-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c (8f2a656c-d231-4a1a-aa58-38198f7f7579) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.804-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 (3d9c7996-cf3a-4167-904c-93a488a83f20).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.775-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 (3d9c7996-cf3a-4167-904c-93a488a83f20) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 1516), t: 1 } and commit timestamp Timestamp(1574796745, 1516)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.046-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c (8f2a656c-d231-4a1a-aa58-38198f7f7579).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.804-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 (3d9c7996-cf3a-4167-904c-93a488a83f20)'. Ident: 'index-432--4104909142373009110', commit timestamp: 'Timestamp(1574796745, 1516)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.775-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 (3d9c7996-cf3a-4167-904c-93a488a83f20).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.046-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c (8f2a656c-d231-4a1a-aa58-38198f7f7579)'. Ident: 'index-402-8224331490264904478', commit timestamp: 'Timestamp(1574796742, 507)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.804-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 (3d9c7996-cf3a-4167-904c-93a488a83f20)'. Ident: 'index-437--4104909142373009110', commit timestamp: 'Timestamp(1574796745, 1516)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.775-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 (3d9c7996-cf3a-4167-904c-93a488a83f20)'. Ident: 'index-432--8000595249233899911', commit timestamp: 'Timestamp(1574796745, 1516)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.046-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c (8f2a656c-d231-4a1a-aa58-38198f7f7579)'. Ident: 'index-403-8224331490264904478', commit timestamp: 'Timestamp(1574796742, 507)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.804-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6'. Ident: collection-431--4104909142373009110, commit timestamp: Timestamp(1574796745, 1516)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.775-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 (3d9c7996-cf3a-4167-904c-93a488a83f20)'. Ident: 'index-437--8000595249233899911', commit timestamp: 'Timestamp(1574796745, 1516)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.046-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c'. Ident: collection-400-8224331490264904478, commit timestamp: Timestamp(1574796742, 507)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.805-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 with provided UUID: c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7 and options: { uuid: UUID("c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.775-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6'. Ident: collection-431--8000595249233899911, commit timestamp: Timestamp(1574796745, 1516)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.046-0500 I NETWORK [listener] connection accepted from 127.0.0.1:40260 #220 (50 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.806-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: aa82e3c1-969d-4b32-913c-a14d3a3ec4aa: test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 ( 7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.776-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 with provided UUID: c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7 and options: { uuid: UUID("c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.046-0500 I COMMAND [conn68] command test5_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5265973246418246414, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 586278164479884562, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796741850), clusterTime: Timestamp(1574796741, 3041) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 3041), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.703f0462-475d-4dde-9c50-b8dc3f1d187c\", to: \"test5_fsmdb0.agg_out\", collectionOptions: {}, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: {}, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:884 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 195ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.819-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.777-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 5392a642-e48b-4bce-8102-9632fc6ca3c2: test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 ( 7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.046-0500 I NETWORK [conn220] received client metadata from 127.0.0.1:40260 conn220: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.820-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e with provided UUID: 0946c9fd-46e0-432e-9edf-44f5a5717c66 and options: { uuid: UUID("0946c9fd-46e0-432e-9edf-44f5a5717c66"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.793-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.048-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 63853552-7a1a-48cc-baf4-48a48e69621e: test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f ( a8cc5830-449d-459a-a062-36665203d501 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.836-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.794-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e with provided UUID: 0946c9fd-46e0-432e-9edf-44f5a5717c66 and options: { uuid: UUID("0946c9fd-46e0-432e-9edf-44f5a5717c66"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.048-0500 I INDEX [conn112] Index build completed: 63853552-7a1a-48cc-baf4-48a48e69621e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.851-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.809-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.048-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 4683), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 488 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 117ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.851-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.827-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.056-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.851-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 0cc66799-ede8-4cbf-8f57-437ed3bde7d9: test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 (9dfabf25-5277-441e-b92a-0df4e5a93c44 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.827-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.056-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.851-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.827-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 467699b1-7b6c-470c-a5a1-c503eb51f4a7: test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 (9dfabf25-5277-441e-b92a-0df4e5a93c44 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.056-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (50313562-2926-49b7-94f9-3777d5535866) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796742, 508), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.852-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.827-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.056-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (50313562-2926-49b7-94f9-3777d5535866).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.852-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 with provided UUID: 7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1 and options: { uuid: UUID("7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.828-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.056-0500 I STORAGE [conn108] renameCollection: renaming collection 6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513 from test5_fsmdb0.tmp.agg_out.8a12051a-2b6c-46be-9624-f093e5610fa2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.853-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.828-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 with provided UUID: 7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1 and options: { uuid: UUID("7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.056-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (50313562-2926-49b7-94f9-3777d5535866)'. Ident: 'index-384-8224331490264904478', commit timestamp: 'Timestamp(1574796742, 508)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.861-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 0cc66799-ede8-4cbf-8f57-437ed3bde7d9: test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 ( 9dfabf25-5277-441e-b92a-0df4e5a93c44 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.831-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.056-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (50313562-2926-49b7-94f9-3777d5535866)'. Ident: 'index-393-8224331490264904478', commit timestamp: 'Timestamp(1574796742, 508)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.869-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.839-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 467699b1-7b6c-470c-a5a1-c503eb51f4a7: test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 ( 9dfabf25-5277-441e-b92a-0df4e5a93c44 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.056-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-382-8224331490264904478, commit timestamp: Timestamp(1574796742, 508)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.873-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46) to test5_fsmdb0.agg_out and drop 6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.847-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.057-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.873-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 2086), t: 1 } and commit timestamp Timestamp(1574796745, 2086)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.852-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46) to test5_fsmdb0.agg_out and drop 6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.057-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4737112332481730144, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6108444870722721602, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796741894), clusterTime: Timestamp(1574796741, 4051) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 4115), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 161ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.873-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.852-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 2086), t: 1 } and commit timestamp Timestamp(1574796745, 2086)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.057-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.873-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46 from test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.852-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.058-0500 W CONTROL [conn220] failpoint: WTPreserveSnapshotHistoryIndefinitely set to: { mode: 1, data: {}, timesEntered: 331 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.873-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513)'. Ident: 'index-416--4104909142373009110', commit timestamp: 'Timestamp(1574796745, 2086)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.852-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46 from test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.061-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.873-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513)'. Ident: 'index-421--4104909142373009110', commit timestamp: 'Timestamp(1574796745, 2086)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.852-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513)'. Ident: 'index-416--8000595249233899911', commit timestamp: 'Timestamp(1574796745, 2086)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.061-0500 I INDEX [conn110] Registering index build: b8d9c1d2-d544-4fca-80cd-4a21030420b3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.873-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-415--4104909142373009110, commit timestamp: Timestamp(1574796745, 2086)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.852-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513)'. Ident: 'index-421--8000595249233899911', commit timestamp: 'Timestamp(1574796745, 2086)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.062-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 with generated UUID: 3d9c7996-cf3a-4167-904c-93a488a83f20 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.876-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 with provided UUID: 34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98 and options: { uuid: UUID("34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.852-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-415--8000595249233899911, commit timestamp: Timestamp(1574796745, 2086)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.063-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 87de212a-dd6d-4ea9-95ac-542a4dfa5e5e: test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c ( 809f5cc3-11f0-44bf-a06b-4b5d9b09c34e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.892-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.855-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 with provided UUID: 34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98 and options: { uuid: UUID("34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.063-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b with generated UUID: 32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.896-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 (9dfabf25-5277-441e-b92a-0df4e5a93c44) to test5_fsmdb0.agg_out and drop 7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.870-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.096-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.896-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 2525), t: 1 } and commit timestamp Timestamp(1574796745, 2525)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.874-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 (9dfabf25-5277-441e-b92a-0df4e5a93c44) to test5_fsmdb0.agg_out and drop 7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.096-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.896-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.874-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 2525), t: 1 } and commit timestamp Timestamp(1574796745, 2525)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.096-0500 I STORAGE [conn110] Index build initialized: b8d9c1d2-d544-4fca-80cd-4a21030420b3: test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 (e598322a-824f-46b8-8433-6606348c63f2 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.896-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 9dfabf25-5277-441e-b92a-0df4e5a93c44 from test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.874-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.096-0500 I INDEX [conn110] Waiting for index build to complete: b8d9c1d2-d544-4fca-80cd-4a21030420b3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.896-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46)'. Ident: 'index-442--4104909142373009110', commit timestamp: 'Timestamp(1574796745, 2525)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.874-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection 9dfabf25-5277-441e-b92a-0df4e5a93c44 from test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.097-0500 I INDEX [conn114] Index build completed: 87de212a-dd6d-4ea9-95ac-542a4dfa5e5e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.896-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46)'. Ident: 'index-445--4104909142373009110', commit timestamp: 'Timestamp(1574796745, 2525)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.874-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46)'. Ident: 'index-442--8000595249233899911', commit timestamp: 'Timestamp(1574796745, 2525)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.103-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.896-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-441--4104909142373009110, commit timestamp: Timestamp(1574796745, 2525)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.874-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46)'. Ident: 'index-445--8000595249233899911', commit timestamp: 'Timestamp(1574796745, 2525)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.111-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.897-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 with provided UUID: aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273 and options: { uuid: UUID("aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.874-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-441--8000595249233899911, commit timestamp: Timestamp(1574796745, 2525)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.111-0500 I INDEX [conn108] Registering index build: 12f6d0e4-95b1-435d-8a14-c63bf659569b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.912-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.875-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 with provided UUID: aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273 and options: { uuid: UUID("aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.111-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.926-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.890-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.111-0500 I INDEX [conn46] Registering index build: c2b4f544-3dd6-4730-a89a-3b9b01bcc238
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.926-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.906-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.111-0500 I COMMAND [conn112] CMD: drop test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.926-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 3cddca9f-2e26-4802-8b16-13adb8901a01: test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e (0946c9fd-46e0-432e-9edf-44f5a5717c66 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.906-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.112-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.926-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.906-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: ee5e5baa-fbcb-4d13-8ae7-9acec03de84a: test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e (0946c9fd-46e0-432e-9edf-44f5a5717c66 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.121-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.927-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.906-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.130-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.931-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.907-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.130-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.942-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 3cddca9f-2e26-4802-8b16-13adb8901a01: test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e ( 0946c9fd-46e0-432e-9edf-44f5a5717c66 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.909-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.130-0500 I STORAGE [conn108] Index build initialized: 12f6d0e4-95b1-435d-8a14-c63bf659569b: test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 (3d9c7996-cf3a-4167-904c-93a488a83f20 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.949-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.918-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: ee5e5baa-fbcb-4d13-8ae7-9acec03de84a: test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e ( 0946c9fd-46e0-432e-9edf-44f5a5717c66 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.130-0500 I INDEX [conn108] Waiting for index build to complete: 12f6d0e4-95b1-435d-8a14-c63bf659569b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.949-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.925-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.130-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f (a8cc5830-449d-459a-a062-36665203d501) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.949-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 31bd6ef4-3b5d-4ec3-a310-004c343b963f: test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.925-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.130-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f (a8cc5830-449d-459a-a062-36665203d501).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.949-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.925-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: d5dbb780-cbee-4849-9bcc-a142795537b8: test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.130-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f (a8cc5830-449d-459a-a062-36665203d501)'. Ident: 'index-408-8224331490264904478', commit timestamp: 'Timestamp(1574796742, 1518)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.950-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.925-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.131-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f (a8cc5830-449d-459a-a062-36665203d501)'. Ident: 'index-411-8224331490264904478', commit timestamp: 'Timestamp(1574796742, 1518)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.952-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.926-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.131-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f'. Ident: collection-406-8224331490264904478, commit timestamp: Timestamp(1574796742, 1518)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.955-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e (0946c9fd-46e0-432e-9edf-44f5a5717c66) to test5_fsmdb0.agg_out and drop 9dfabf25-5277-441e-b92a-0df4e5a93c44.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.929-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.131-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.955-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (9dfabf25-5277-441e-b92a-0df4e5a93c44) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 3034), t: 1 } and commit timestamp Timestamp(1574796745, 3034)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.933-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e (0946c9fd-46e0-432e-9edf-44f5a5717c66) to test5_fsmdb0.agg_out and drop 9dfabf25-5277-441e-b92a-0df4e5a93c44.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.131-0500 I COMMAND [conn67] command test5_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1048354557667804749, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1817551531129507812, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796741896), clusterTime: Timestamp(1574796741, 4115) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 4246), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.5ab0db06-d6cc-4cb2-b6fd-13dd206bb96f\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:991 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 234ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.955-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (9dfabf25-5277-441e-b92a-0df4e5a93c44).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.933-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (9dfabf25-5277-441e-b92a-0df4e5a93c44) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 3034), t: 1 } and commit timestamp Timestamp(1574796745, 3034)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.933-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (9dfabf25-5277-441e-b92a-0df4e5a93c44).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.955-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 0946c9fd-46e0-432e-9edf-44f5a5717c66 from test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.131-0500 I COMMAND [conn114] CMD: drop test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.933-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection 0946c9fd-46e0-432e-9edf-44f5a5717c66 from test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.955-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9dfabf25-5277-441e-b92a-0df4e5a93c44)'. Ident: 'index-444--4104909142373009110', commit timestamp: 'Timestamp(1574796745, 3034)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.136-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: b8d9c1d2-d544-4fca-80cd-4a21030420b3: test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 ( e598322a-824f-46b8-8433-6606348c63f2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.933-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9dfabf25-5277-441e-b92a-0df4e5a93c44)'. Ident: 'index-444--8000595249233899911', commit timestamp: 'Timestamp(1574796745, 3034)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.955-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9dfabf25-5277-441e-b92a-0df4e5a93c44)'. Ident: 'index-451--4104909142373009110', commit timestamp: 'Timestamp(1574796745, 3034)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:28.862-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796745, 1516), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3130ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.136-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.933-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9dfabf25-5277-441e-b92a-0df4e5a93c44)'. Ident: 'index-451--8000595249233899911', commit timestamp: 'Timestamp(1574796745, 3034)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.955-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-443--4104909142373009110, commit timestamp: Timestamp(1574796745, 3034)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:22.152-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.933-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-443--8000595249233899911, commit timestamp: Timestamp(1574796745, 3034)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.956-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 with provided UUID: c3b2f2c7-1ecd-4f72-8da4-27a519319358 and options: { uuid: UUID("c3b2f2c7-1ecd-4f72-8da4-27a519319358"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.657-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.934-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 with provided UUID: c3b2f2c7-1ecd-4f72-8da4-27a519319358 and options: { uuid: UUID("c3b2f2c7-1ecd-4f72-8da4-27a519319358"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.956-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 31bd6ef4-3b5d-4ec3-a310-004c343b963f: test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 ( c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:23.322-0500 I CONNPOOL [TaskExecutorPool-0] Ending idle connection to host localhost:20004 because the pool meets constraints; 2 connections to that host remain open
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:28.863-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796745, 2150), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3079ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.935-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d5dbb780-cbee-4849-9bcc-a142795537b8: test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 ( c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.972-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.658-0500 I STORAGE [conn46] Index build initialized: c2b4f544-3dd6-4730-a89a-3b9b01bcc238: test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b (32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.950-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.989-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.658-0500 I INDEX [conn46] Waiting for index build to complete: c2b4f544-3dd6-4730-a89a-3b9b01bcc238
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.966-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.989-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.658-0500 I INDEX [conn110] Index build completed: b8d9c1d2-d544-4fca-80cd-4a21030420b3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.966-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.989-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 230d3f8e-d797-420b-9ef5-08389e0feead: test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.658-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c (809f5cc3-11f0-44bf-a06b-4b5d9b09c34e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.967-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 2f46c2dd-72b2-4f88-8a20-24bcdfc0737c: test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.989-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.658-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.967-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.989-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.992-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.969-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.658-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c (809f5cc3-11f0-44bf-a06b-4b5d9b09c34e).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:25.996-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 230d3f8e-d797-420b-9ef5-08389e0feead: test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 ( 34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.973-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.658-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796742, 511), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796742, 577), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796742, 511). Collection minimum timestamp is Timestamp(1574796745, 1)" errName:SnapshotUnavailable errCode:246 reslen:578 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 3492195 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 3492ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.009-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.981-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 2f46c2dd-72b2-4f88-8a20-24bcdfc0737c: test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 ( 34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.658-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796742, 508), signature: { hash: BinData(0, 028CA2BD441AA7DBBFCDC1142AD7B5D92C285430), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 4512 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 3601ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.009-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.987-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.658-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c (809f5cc3-11f0-44bf-a06b-4b5d9b09c34e)'. Ident: 'index-414-8224331490264904478', commit timestamp: 'Timestamp(1574796745, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.009-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: e132731e-c1c7-4f97-a9bf-7e93254da6ef: test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.987-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.658-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c (809f5cc3-11f0-44bf-a06b-4b5d9b09c34e)'. Ident: 'index-415-8224331490264904478', commit timestamp: 'Timestamp(1574796745, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.009-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.987-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: d78a7eda-50c8-413a-ae66-fd9dfaba1a27: test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.658-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c'. Ident: collection-412-8224331490264904478, commit timestamp: Timestamp(1574796745, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.010-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.987-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.658-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c command: drop { drop: "tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, $clusterTime: { clusterTime: Timestamp(1574796742, 1518), signature: { hash: BinData(0, 028CA2BD441AA7DBBFCDC1142AD7B5D92C285430), keyId: 6763700092420489256 } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:420 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 3526ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.012-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.988-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.658-0500 I COMMAND [conn65] command test5_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 9164089836250909855, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6503802721154235405, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796741982), clusterTime: Timestamp(1574796741, 5121) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796741, 5122), signature: { hash: BinData(0, 53890F2F3CE644810CE799AEF8EAECB8AD11AFC2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.458e1e27-dfcc-4e8d-a1c1-b345e0d8238c\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:991 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3674ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.018-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: e132731e-c1c7-4f97-a9bf-7e93254da6ef: test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 ( 7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.990-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.661-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.023-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:25.998-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d78a7eda-50c8-413a-ae66-fd9dfaba1a27: test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 ( 7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.661-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.023-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:26.003-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.663-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.023-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: c57877e5-5c95-4fc8-b242-e41b53ca764c: test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:26.004-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.664-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 with generated UUID: 7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.024-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:26.004-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 65a90787-144b-4e79-82e2-7329daec3921: test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.664-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 12f6d0e4-95b1-435d-8a14-c63bf659569b: test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 ( 3d9c7996-cf3a-4167-904c-93a488a83f20 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.024-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:26.004-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.664-0500 I INDEX [conn108] Index build completed: 12f6d0e4-95b1-435d-8a14-c63bf659569b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.025-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7) to test5_fsmdb0.agg_out and drop 0946c9fd-46e0-432e-9edf-44f5a5717c66.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:26.004-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.665-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796742, 1014), signature: { hash: BinData(0, 028CA2BD441AA7DBBFCDC1142AD7B5D92C285430), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 7494 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 3561ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.026-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:26.005-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7) to test5_fsmdb0.agg_out and drop 0946c9fd-46e0-432e-9edf-44f5a5717c66.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:26.007-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.026-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (0946c9fd-46e0-432e-9edf-44f5a5717c66) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 3543), t: 1 } and commit timestamp Timestamp(1574796745, 3543)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.665-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 with generated UUID: 9dfabf25-5277-441e-b92a-0df4e5a93c44 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:26.007-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (0946c9fd-46e0-432e-9edf-44f5a5717c66) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 3543), t: 1 } and commit timestamp Timestamp(1574796745, 3543)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.026-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (0946c9fd-46e0-432e-9edf-44f5a5717c66).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.666-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c2b4f544-3dd6-4730-a89a-3b9b01bcc238: test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b ( 32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:26.007-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (0946c9fd-46e0-432e-9edf-44f5a5717c66).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.026-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7 from test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.666-0500 I INDEX [conn46] Index build completed: c2b4f544-3dd6-4730-a89a-3b9b01bcc238
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:26.007-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7 from test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.026-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0946c9fd-46e0-432e-9edf-44f5a5717c66)'. Ident: 'index-450--4104909142373009110', commit timestamp: 'Timestamp(1574796745, 3543)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.666-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796742, 1014), signature: { hash: BinData(0, 028CA2BD441AA7DBBFCDC1142AD7B5D92C285430), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 372 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 3555ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:26.007-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0946c9fd-46e0-432e-9edf-44f5a5717c66)'. Ident: 'index-450--8000595249233899911', commit timestamp: 'Timestamp(1574796745, 3543)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.026-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0946c9fd-46e0-432e-9edf-44f5a5717c66)'. Ident: 'index-459--4104909142373009110', commit timestamp: 'Timestamp(1574796745, 3543)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.686-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:26.007-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0946c9fd-46e0-432e-9edf-44f5a5717c66)'. Ident: 'index-459--8000595249233899911', commit timestamp: 'Timestamp(1574796745, 3543)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.026-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-449--4104909142373009110, commit timestamp: Timestamp(1574796745, 3543)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.686-0500 I INDEX [conn110] Registering index build: 0e056e54-e3d3-4f39-92e0-1ae7d7fa0577
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:26.007-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-449--8000595249233899911, commit timestamp: Timestamp(1574796745, 3543)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:26.028-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c57877e5-5c95-4fc8-b242-e41b53ca764c: test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 ( aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.695-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:26.007-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 65a90787-144b-4e79-82e2-7329daec3921: test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 ( aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.868-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98) to test5_fsmdb0.agg_out and drop c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.709-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.867-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98) to test5_fsmdb0.agg_out and drop c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.868-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796748, 2), t: 1 } and commit timestamp Timestamp(1574796748, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.709-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.867-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796748, 2), t: 1 } and commit timestamp Timestamp(1574796748, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.868-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.709-0500 I STORAGE [conn110] Index build initialized: 0e056e54-e3d3-4f39-92e0-1ae7d7fa0577: test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.867-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.868-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98 from test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.709-0500 I INDEX [conn110] Waiting for index build to complete: 0e056e54-e3d3-4f39-92e0-1ae7d7fa0577
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.867-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection 34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98 from test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.868-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7)'. Ident: 'index-448--4104909142373009110', commit timestamp: 'Timestamp(1574796748, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.709-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.867-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7)'. Ident: 'index-448--8000595249233899911', commit timestamp: 'Timestamp(1574796748, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.868-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7)'. Ident: 'index-461--4104909142373009110', commit timestamp: 'Timestamp(1574796748, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.709-0500 I INDEX [conn108] Registering index build: b0b371a3-1f65-441f-b3e1-4b89da4bc011
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.867-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7)'. Ident: 'index-461--8000595249233899911', commit timestamp: 'Timestamp(1574796748, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.868-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-447--4104909142373009110, commit timestamp: Timestamp(1574796748, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.709-0500 I COMMAND [conn112] CMD: drop test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.867-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-447--8000595249233899911, commit timestamp: Timestamp(1574796748, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.709-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.721-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.729-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.729-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.729-0500 I STORAGE [conn108] Index build initialized: b0b371a3-1f65-441f-b3e1-4b89da4bc011: test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 (9dfabf25-5277-441e-b92a-0df4e5a93c44 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.729-0500 I INDEX [conn108] Waiting for index build to complete: b0b371a3-1f65-441f-b3e1-4b89da4bc011
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.729-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 (e598322a-824f-46b8-8433-6606348c63f2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.729-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 (e598322a-824f-46b8-8433-6606348c63f2).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.729-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 (e598322a-824f-46b8-8433-6606348c63f2)'. Ident: 'index-418-8224331490264904478', commit timestamp: 'Timestamp(1574796745, 1514)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.729-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0 (e598322a-824f-46b8-8433-6606348c63f2)'. Ident: 'index-419-8224331490264904478', commit timestamp: 'Timestamp(1574796745, 1514)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.729-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0'. Ident: collection-416-8224331490264904478, commit timestamp: Timestamp(1574796745, 1514)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 0e056e54-e3d3-4f39-92e0-1ae7d7fa0577: test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 ( 7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I INDEX [conn110] Index build completed: 0e056e54-e3d3-4f39-92e0-1ae7d7fa0577
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I COMMAND [conn71] command test5_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1378088436904515703, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5027090800613394544, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796742012), clusterTime: Timestamp(1574796742, 2) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796742, 2), signature: { hash: BinData(0, 028CA2BD441AA7DBBFCDC1142AD7B5D92C285430), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.5573877a-882e-4982-a8f4-dd74c3da5dc0\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:993 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3711ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I COMMAND [conn110] CMD: drop test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I COMMAND [conn114] CMD: drop test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b (32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b (32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b (32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8)'. Ident: 'index-424-8224331490264904478', commit timestamp: 'Timestamp(1574796745, 1515)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b (32c46f55-894c-4f71-9d2c-0bcf5cd6d5b8)'. Ident: 'index-427-8224331490264904478', commit timestamp: 'Timestamp(1574796745, 1515)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 (3d9c7996-cf3a-4167-904c-93a488a83f20) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b'. Ident: collection-421-8224331490264904478, commit timestamp: Timestamp(1574796745, 1515)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 (3d9c7996-cf3a-4167-904c-93a488a83f20).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 (3d9c7996-cf3a-4167-904c-93a488a83f20)'. Ident: 'index-423-8224331490264904478', commit timestamp: 'Timestamp(1574796745, 1516)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6 (3d9c7996-cf3a-4167-904c-93a488a83f20)'. Ident: 'index-425-8224331490264904478', commit timestamp: 'Timestamp(1574796745, 1516)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6'. Ident: collection-420-8224331490264904478, commit timestamp: Timestamp(1574796745, 1516)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.730-0500 I COMMAND [conn68] command test5_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6435966513517694773, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3603369269496267620, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796742062), clusterTime: Timestamp(1574796742, 511) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796742, 512), signature: { hash: BinData(0, 028CA2BD441AA7DBBFCDC1142AD7B5D92C285430), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.b757bbbd-09ac-4f99-a7ee-baf2cb60e41b\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:991 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3667ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.731-0500 I COMMAND [conn70] command test5_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8251187729079198653, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8026490634381743681, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796742058), clusterTime: Timestamp(1574796742, 508) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796742, 511), signature: { hash: BinData(0, 028CA2BD441AA7DBBFCDC1142AD7B5D92C285430), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796740, 567), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.c7493b60-f756-48a5-a046-3b9791cac7c6\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", 
validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:991 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3669ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.731-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.733-0500 I COMMAND [conn70] CMD: dropIndexes test5_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.733-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 with generated UUID: c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.733-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e with generated UUID: 0946c9fd-46e0-432e-9edf-44f5a5717c66 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.734-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.735-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 with generated UUID: 7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.748-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: b0b371a3-1f65-441f-b3e1-4b89da4bc011: test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 ( 9dfabf25-5277-441e-b92a-0df4e5a93c44 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.748-0500 I INDEX [conn108] Index build completed: b0b371a3-1f65-441f-b3e1-4b89da4bc011
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.765-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.772-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.780-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.781-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.781-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 2086), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.781-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.781-0500 I STORAGE [conn110] renameCollection: renaming collection 7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46 from test5_fsmdb0.tmp.agg_out.43bbc6e1-c29a-4497-804e-6a33bde3eca1 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.781-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513)'. Ident: 'index-407-8224331490264904478', commit timestamp: 'Timestamp(1574796745, 2086)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.781-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6ba7ce5d-e8e8-4f76-9a81-5d5c00b9d513)'. Ident: 'index-409-8224331490264904478', commit timestamp: 'Timestamp(1574796745, 2086)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.781-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-405-8224331490264904478, commit timestamp: Timestamp(1574796745, 2086)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.781-0500 I INDEX [conn114] Registering index build: 0ccb852a-8c66-4b67-a6dc-5d95143f3f53
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.781-0500 I INDEX [conn46] Registering index build: acd09317-508f-436f-bdfc-5992bbd07f82
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.781-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7517869455721202200, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3773493184455374643, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796745659), clusterTime: Timestamp(1574796745, 2) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796745, 2), signature: { hash: BinData(0, D92DEC46A842858CF0D1E4F77AF6BEA0ABC01273), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796745, 2), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 3416 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 121ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.781-0500 I INDEX [conn112] Registering index build: 94b6416c-817d-430b-89ac-fed13add4297
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.785-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 with generated UUID: 34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.804-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.804-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.804-0500 I STORAGE [conn114] Index build initialized: 0ccb852a-8c66-4b67-a6dc-5d95143f3f53: test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e (0946c9fd-46e0-432e-9edf-44f5a5717c66 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.804-0500 I INDEX [conn114] Waiting for index build to complete: 0ccb852a-8c66-4b67-a6dc-5d95143f3f53
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.811-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.811-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.811-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 2525), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.811-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.811-0500 I STORAGE [conn108] renameCollection: renaming collection 9dfabf25-5277-441e-b92a-0df4e5a93c44 from test5_fsmdb0.tmp.agg_out.35369c19-daea-41ee-a066-be8a7b3d7558 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.811-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46)'. Ident: 'index-431-8224331490264904478', commit timestamp: 'Timestamp(1574796745, 2525)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.811-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7662ed2e-e39a-4dd8-a3d4-20a8fd91ed46)'. Ident: 'index-433-8224331490264904478', commit timestamp: 'Timestamp(1574796745, 2525)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.811-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-429-8224331490264904478, commit timestamp: Timestamp(1574796745, 2525)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.811-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.812-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7671070244559136805, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3853481053285499855, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796745664), clusterTime: Timestamp(1574796745, 7) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796745, 8), signature: { hash: BinData(0, D92DEC46A842858CF0D1E4F77AF6BEA0ABC01273), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796745, 2), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 147ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.812-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.812-0500 I INDEX [conn110] Registering index build: f3962420-6252-481c-b189-df943f947f47
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.815-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 with generated UUID: aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.821-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.838-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.838-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.838-0500 I STORAGE [conn46] Index build initialized: acd09317-508f-436f-bdfc-5992bbd07f82: test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.838-0500 I INDEX [conn46] Waiting for index build to complete: acd09317-508f-436f-bdfc-5992bbd07f82
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.838-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.839-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 0ccb852a-8c66-4b67-a6dc-5d95143f3f53: test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e ( 0946c9fd-46e0-432e-9edf-44f5a5717c66 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.845-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.845-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.845-0500 I INDEX [conn108] Registering index build: 76e1b0b9-6c83-42c5-95a4-8e18c297c229
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.855-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.862-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.862-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.862-0500 I STORAGE [conn112] Index build initialized: 94b6416c-817d-430b-89ac-fed13add4297: test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.862-0500 I INDEX [conn112] Waiting for index build to complete: 94b6416c-817d-430b-89ac-fed13add4297
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.862-0500 I INDEX [conn114] Index build completed: 0ccb852a-8c66-4b67-a6dc-5d95143f3f53
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.864-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: acd09317-508f-436f-bdfc-5992bbd07f82: test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 ( c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.881-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.881-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.881-0500 I STORAGE [conn110] Index build initialized: f3962420-6252-481c-b189-df943f947f47: test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.881-0500 I INDEX [conn110] Waiting for index build to complete: f3962420-6252-481c-b189-df943f947f47
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.881-0500 I INDEX [conn46] Index build completed: acd09317-508f-436f-bdfc-5992bbd07f82
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.881-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.881-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796745, 2085), signature: { hash: BinData(0, D92DEC46A842858CF0D1E4F77AF6BEA0ABC01273), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796745, 2), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 22501 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 115ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.881-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (9dfabf25-5277-441e-b92a-0df4e5a93c44) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 3034), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.881-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (9dfabf25-5277-441e-b92a-0df4e5a93c44).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.881-0500 I STORAGE [conn114] renameCollection: renaming collection 0946c9fd-46e0-432e-9edf-44f5a5717c66 from test5_fsmdb0.tmp.agg_out.a0e5a232-0f2c-4428-8306-f75792ef347e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.881-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9dfabf25-5277-441e-b92a-0df4e5a93c44)'. Ident: 'index-432-8224331490264904478', commit timestamp: 'Timestamp(1574796745, 3034)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.881-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9dfabf25-5277-441e-b92a-0df4e5a93c44)'. Ident: 'index-435-8224331490264904478', commit timestamp: 'Timestamp(1574796745, 3034)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.881-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-430-8224331490264904478, commit timestamp: Timestamp(1574796745, 3034)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.881-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.881-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.882-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 9072253456293314981, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2987999489595401953, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796745732), clusterTime: Timestamp(1574796745, 1516) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796745, 1516), signature: { hash: BinData(0, D92DEC46A842858CF0D1E4F77AF6BEA0ABC01273), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796745, 2), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 199 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 148ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.882-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.885-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 with generated UUID: c3b2f2c7-1ecd-4f72-8da4-27a519319358 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.885-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.894-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.902-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: f3962420-6252-481c-b189-df943f947f47: test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 ( 34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.910-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.910-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.911-0500 I STORAGE [conn108] Index build initialized: 76e1b0b9-6c83-42c5-95a4-8e18c297c229: test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.911-0500 I INDEX [conn108] Waiting for index build to complete: 76e1b0b9-6c83-42c5-95a4-8e18c297c229
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.911-0500 I INDEX [conn110] Index build completed: f3962420-6252-481c-b189-df943f947f47
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.911-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.913-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.921-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.921-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.924-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.924-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.924-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (0946c9fd-46e0-432e-9edf-44f5a5717c66) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796745, 3543), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.924-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (0946c9fd-46e0-432e-9edf-44f5a5717c66).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.924-0500 I STORAGE [conn46] renameCollection: renaming collection c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7 from test5_fsmdb0.tmp.agg_out.f886d479-b8dd-4dae-86e3-b90193e4e623 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.924-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0946c9fd-46e0-432e-9edf-44f5a5717c66)'. Ident: 'index-441-8224331490264904478', commit timestamp: 'Timestamp(1574796745, 3543)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.924-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0946c9fd-46e0-432e-9edf-44f5a5717c66)'. Ident: 'index-443-8224331490264904478', commit timestamp: 'Timestamp(1574796745, 3543)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.924-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-438-8224331490264904478, commit timestamp: Timestamp(1574796745, 3543)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.924-0500 I INDEX [conn114] Registering index build: c231a1f8-4e42-4971-b81f-315ee1728a48
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.924-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2147643846826639313, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3818182030755030640, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796745732), clusterTime: Timestamp(1574796745, 1516) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796745, 1516), signature: { hash: BinData(0, D92DEC46A842858CF0D1E4F77AF6BEA0ABC01273), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796745, 2), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 267 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 191ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.927-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 94b6416c-817d-430b-89ac-fed13add4297: test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 ( 7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.929-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 76e1b0b9-6c83-42c5-95a4-8e18c297c229: test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 ( aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:25.947-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.862-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.862-0500 I STORAGE [conn114] Index build initialized: c231a1f8-4e42-4971-b81f-315ee1728a48: test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 (c3b2f2c7-1ecd-4f72-8da4-27a519319358 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.862-0500 I INDEX [conn112] Index build completed: 94b6416c-817d-430b-89ac-fed13add4297
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.862-0500 I INDEX [conn108] Index build completed: 76e1b0b9-6c83-42c5-95a4-8e18c297c229
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.862-0500 I INDEX [conn114] Waiting for index build to complete: c231a1f8-4e42-4971-b81f-315ee1728a48
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.862-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.862-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796745, 2086), signature: { hash: BinData(0, D92DEC46A842858CF0D1E4F77AF6BEA0ABC01273), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796745, 2), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 3080ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.862-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796745, 2529), signature: { hash: BinData(0, D92DEC46A842858CF0D1E4F77AF6BEA0ABC01273), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796745, 2), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 443 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 3016ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.862-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796748, 2), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.862-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.862-0500 I STORAGE [conn110] renameCollection: renaming collection 34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98 from test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.862-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7)'. Ident: 'index-440-8224331490264904478', commit timestamp: 'Timestamp(1574796748, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.862-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c1cc6b6e-5519-4aef-ba0a-a1b4e3f201b7)'. Ident: 'index-447-8224331490264904478', commit timestamp: 'Timestamp(1574796748, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.862-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-437-8224331490264904478, commit timestamp: Timestamp(1574796748, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.863-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47 appName: "tid:4" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.8184d3c4-cbe9-46f7-84b9-4e7e38ee8f47", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "moderate", validationAction: "warn" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796745, 4043), signature: { hash: BinData(0, D92DEC46A842858CF0D1E4F77AF6BEA0ABC01273), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796745, 2), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2917083 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2917ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.863-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.863-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796745, 3037), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796745, 3101), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796745, 3037). Collection minimum timestamp is Timestamp(1574796748, 2)" errName:SnapshotUnavailable errCode:246 reslen:579 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2871395 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2871ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.863-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8809313393948676691, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1839495326813685874, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796745783), clusterTime: Timestamp(1574796745, 2150) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796745, 2278), signature: { hash: BinData(0, D92DEC46A842858CF0D1E4F77AF6BEA0ABC01273), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796745, 2), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3078ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.864-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.866-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.866-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa with generated UUID: eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.867-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c231a1f8-4e42-4971-b81f-315ee1728a48: test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 ( c3b2f2c7-1ecd-4f72-8da4-27a519319358 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.867-0500 I INDEX [conn114] Index build completed: c231a1f8-4e42-4971-b81f-315ee1728a48
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.867-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796745, 3540), signature: { hash: BinData(0, D92DEC46A842858CF0D1E4F77AF6BEA0ABC01273), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796745, 2), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 2600 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2945ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.868-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 with generated UUID: a777b042-8585-46ad-bc0a-bc47f19c6395 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.887-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.887-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.887-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 97c349a7-42bc-40d3-8bd9-bc23aa8aca4c: test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 (c3b2f2c7-1ecd-4f72-8da4-27a519319358 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.887-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.888-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.888-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.888-0500 I INDEX [conn110] Registering index build: 7ea4c1b6-0fd3-400a-8677-9e0922837812
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.890-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.891-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 97c349a7-42bc-40d3-8bd9-bc23aa8aca4c: test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 ( c3b2f2c7-1ecd-4f72-8da4-27a519319358 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.892-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa with provided UUID: eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a and options: { uuid: UUID("eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.896-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.902-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.902-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.902-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 4536f579-fe3f-4092-8605-ba5d4f538744: test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 (c3b2f2c7-1ecd-4f72-8da4-27a519319358 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.902-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.903-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.905-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.906-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.906-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 with provided UUID: a777b042-8585-46ad-bc0a-bc47f19c6395 and options: { uuid: UUID("a777b042-8585-46ad-bc0a-bc47f19c6395"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.907-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa with provided UUID: eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a and options: { uuid: UUID("eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.908-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4536f579-fe3f-4092-8605-ba5d4f538744: test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 ( c3b2f2c7-1ecd-4f72-8da4-27a519319358 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.909-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.909-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.909-0500 I STORAGE [conn110] Index build initialized: 7ea4c1b6-0fd3-400a-8677-9e0922837812: test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.909-0500 I INDEX [conn110] Waiting for index build to complete: 7ea4c1b6-0fd3-400a-8677-9e0922837812
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.909-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.910-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796748, 1021), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.910-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.910-0500 I STORAGE [conn112] renameCollection: renaming collection 7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1 from test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.910-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98)'. Ident: 'index-446-8224331490264904478', commit timestamp: 'Timestamp(1574796748, 1021)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.910-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98)'. Ident: 'index-453-8224331490264904478', commit timestamp: 'Timestamp(1574796748, 1021)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.910-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-444-8224331490264904478, commit timestamp: Timestamp(1574796748, 1021)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.910-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.910-0500 I INDEX [conn108] Registering index build: 302b7723-628c-4a44-99d0-3e49dbcf3315
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.910-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8926642758438477714, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5364847048683658928, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796745734), clusterTime: Timestamp(1574796745, 1516) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796745, 1584), signature: { hash: BinData(0, D92DEC46A842858CF0D1E4F77AF6BEA0ABC01273), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796745, 2), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3175ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:28.910-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796745, 1516), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3176ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.911-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.922-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.922-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.924-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.925-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 with provided UUID: a777b042-8585-46ad-bc0a-bc47f19c6395 and options: { uuid: UUID("a777b042-8585-46ad-bc0a-bc47f19c6395"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.929-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:28.930-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796745, 2525), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3117ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.938-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1) to test5_fsmdb0.agg_out and drop 34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.940-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:28.964-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796745, 3034), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3081ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.929-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:28.995-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796745, 3543), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 131ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.938-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796748, 1021), t: 1 } and commit timestamp Timestamp(1574796748, 1021)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.947-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1) to test5_fsmdb0.agg_out and drop 34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:29.074-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796748, 1137), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:826 protocol:op_msg 162ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:29.070-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796748, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:826 protocol:op_msg 203ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.929-0500 I STORAGE [conn108] Index build initialized: 302b7723-628c-4a44-99d0-3e49dbcf3315: test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 (a777b042-8585-46ad-bc0a-bc47f19c6395 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.938-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.947-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796748, 1021), t: 1 } and commit timestamp Timestamp(1574796748, 1021)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:29.156-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796748, 1582), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:826 protocol:op_msg 190ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:29.074-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796748, 1269), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:826 protocol:op_msg 142ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.929-0500 I INDEX [conn108] Waiting for index build to complete: 302b7723-628c-4a44-99d0-3e49dbcf3315
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.938-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection 7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1 from test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.947-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98).
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:29.229-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796749, 1324), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 154ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:29.156-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796749, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 142ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.929-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.938-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98)'. Ident: 'index-456--8000595249233899911', commit timestamp: 'Timestamp(1574796748, 1021)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.948-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1 from test5_fsmdb0.tmp.agg_out.eb4e6b3b-5fc9-4176-9c00-d92355df18f0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:29.229-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796749, 1320), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 157ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.930-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796748, 1205), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.938-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98)'. Ident: 'index-465--8000595249233899911', commit timestamp: 'Timestamp(1574796748, 1021)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.948-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98)'. Ident: 'index-456--4104909142373009110', commit timestamp: 'Timestamp(1574796748, 1021)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.930-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1).
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:32.108-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796749, 1324), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3032ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.938-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-455--8000595249233899911, commit timestamp: Timestamp(1574796748, 1021)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.948-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (34b5e0b2-9da7-434a-8c6e-1f27d0ab5e98)'. Ident: 'index-465--4104909142373009110', commit timestamp: 'Timestamp(1574796748, 1021)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.930-0500 I STORAGE [conn46] renameCollection: renaming collection aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273 from test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.956-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.948-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-455--4104909142373009110, commit timestamp: Timestamp(1574796748, 1021)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.930-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1)'. Ident: 'index-442-8224331490264904478', commit timestamp: 'Timestamp(1574796748, 1205)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.956-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.972-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.930-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1)'. Ident: 'index-451-8224331490264904478', commit timestamp: 'Timestamp(1574796748, 1205)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.956-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 326c3104-320d-4773-b730-ffa42736437f: test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.972-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.930-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-439-8224331490264904478, commit timestamp: Timestamp(1574796748, 1205)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.956-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.972-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 35ffc802-0d3a-4aad-94ed-306916953a7d: test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.930-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.957-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.972-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.930-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 7ea4c1b6-0fd3-400a-8677-9e0922837812: test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa ( eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.958-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273) to test5_fsmdb0.agg_out and drop 7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.973-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.930-0500 I INDEX [conn110] Index build completed: 7ea4c1b6-0fd3-400a-8677-9e0922837812
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.959-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.974-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273) to test5_fsmdb0.agg_out and drop 7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.930-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4034337965216326122, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2630275997222398421, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796745813), clusterTime: Timestamp(1574796745, 2525) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796745, 2525), signature: { hash: BinData(0, D92DEC46A842858CF0D1E4F77AF6BEA0ABC01273), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796745, 2), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3116ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.960-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796748, 1205), t: 1 } and commit timestamp Timestamp(1574796748, 1205)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.975-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.931-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.960-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.975-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796748, 1205), t: 1 } and commit timestamp Timestamp(1574796748, 1205)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.931-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c with generated UUID: 32e8414d-bc21-4662-9fd8-de58199f7587 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.960-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273 from test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.975-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.933-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 with generated UUID: 298522d2-dfa4-4ba4-8daf-896261426c8d and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.960-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1)'. Ident: 'index-454--8000595249233899911', commit timestamp: 'Timestamp(1574796748, 1205)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.975-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273 from test5_fsmdb0.tmp.agg_out.50954aed-38cb-45f8-b3e1-a877162c5f39 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.933-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.951-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 302b7723-628c-4a44-99d0-3e49dbcf3315: test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 ( a777b042-8585-46ad-bc0a-bc47f19c6395 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.975-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1)'. Ident: 'index-454--4104909142373009110', commit timestamp: 'Timestamp(1574796748, 1205)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.960-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1)'. Ident: 'index-467--8000595249233899911', commit timestamp: 'Timestamp(1574796748, 1205)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.951-0500 I INDEX [conn108] Index build completed: 302b7723-628c-4a44-99d0-3e49dbcf3315
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.975-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7ddfc8b3-7768-4fc5-bb22-2daaa58a8de1)'. Ident: 'index-467--4104909142373009110', commit timestamp: 'Timestamp(1574796748, 1205)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.960-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-453--8000595249233899911, commit timestamp: Timestamp(1574796748, 1205)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.957-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.975-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-453--4104909142373009110, commit timestamp: Timestamp(1574796748, 1205)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.962-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 326c3104-320d-4773-b730-ffa42736437f: test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa ( eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.962-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c with provided UUID: 32e8414d-bc21-4662-9fd8-de58199f7587 and options: { uuid: UUID("32e8414d-bc21-4662-9fd8-de58199f7587"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.977-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 35ffc802-0d3a-4aad-94ed-306916953a7d: test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa ( eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.963-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.979-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.980-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c with provided UUID: 32e8414d-bc21-4662-9fd8-de58199f7587 and options: { uuid: UUID("32e8414d-bc21-4662-9fd8-de58199f7587"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.963-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.981-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 with provided UUID: 298522d2-dfa4-4ba4-8daf-896261426c8d and options: { uuid: UUID("298522d2-dfa4-4ba4-8daf-896261426c8d"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.996-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.963-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796748, 1518), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:28.998-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:28.999-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 with provided UUID: 298522d2-dfa4-4ba4-8daf-896261426c8d and options: { uuid: UUID("298522d2-dfa4-4ba4-8daf-896261426c8d"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.963-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.014-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.012-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.963-0500 I STORAGE [conn114] renameCollection: renaming collection c3b2f2c7-1ecd-4f72-8da4-27a519319358 from test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.014-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.030-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.963-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273)'. Ident: 'index-450-8224331490264904478', commit timestamp: 'Timestamp(1574796748, 1518)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.014-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 8d5bcc21-3e5d-4092-b1b8-2eb6acea35cb: test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 (a777b042-8585-46ad-bc0a-bc47f19c6395 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.030-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.963-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273)'. Ident: 'index-455-8224331490264904478', commit timestamp: 'Timestamp(1574796748, 1518)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.014-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.030-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 63113919-eb48-46fd-897f-571d538bfe0d: test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 (a777b042-8585-46ad-bc0a-bc47f19c6395 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.963-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-448-8224331490264904478, commit timestamp: Timestamp(1574796748, 1518)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.015-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.031-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.964-0500 I INDEX [conn46] Registering index build: b9dffa22-5301-4ceb-923d-c25c679d0f43
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.017-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.031-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.964-0500 I INDEX [conn110] Registering index build: 26b05e26-6dbf-4d27-a806-1114427ff1a7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.017-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 (c3b2f2c7-1ecd-4f72-8da4-27a519319358) to test5_fsmdb0.agg_out and drop aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.034-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 (c3b2f2c7-1ecd-4f72-8da4-27a519319358) to test5_fsmdb0.agg_out and drop aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.964-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3682342408501194418, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5408914231546325186, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796745883), clusterTime: Timestamp(1574796745, 3034) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796745, 3034), signature: { hash: BinData(0, D92DEC46A842858CF0D1E4F77AF6BEA0ABC01273), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796745, 2), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3079ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.017-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796748, 1518), t: 1 } and commit timestamp Timestamp(1574796748, 1518)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.034-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.967-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 with generated UUID: 1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.017-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.034-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796748, 1518), t: 1 } and commit timestamp Timestamp(1574796748, 1518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.987-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.017-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection c3b2f2c7-1ecd-4f72-8da4-27a519319358 from test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.034-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.987-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.017-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273)'. Ident: 'index-458--8000595249233899911', commit timestamp: 'Timestamp(1574796748, 1518)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.034-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection c3b2f2c7-1ecd-4f72-8da4-27a519319358 from test5_fsmdb0.tmp.agg_out.9c8d4c62-f9a6-4272-8112-da29fc2c3ac2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.987-0500 I STORAGE [conn46] Index build initialized: b9dffa22-5301-4ceb-923d-c25c679d0f43: test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c (32e8414d-bc21-4662-9fd8-de58199f7587 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.017-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273)'. Ident: 'index-469--8000595249233899911', commit timestamp: 'Timestamp(1574796748, 1518)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.034-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273)'. Ident: 'index-458--4104909142373009110', commit timestamp: 'Timestamp(1574796748, 1518)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.987-0500 I INDEX [conn46] Waiting for index build to complete: b9dffa22-5301-4ceb-923d-c25c679d0f43
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.017-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-457--8000595249233899911, commit timestamp: Timestamp(1574796748, 1518)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.034-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (aa45aa7c-eb2d-44a9-94e0-a6e42bb8f273)'. Ident: 'index-469--4104909142373009110', commit timestamp: 'Timestamp(1574796748, 1518)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.994-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.019-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 8d5bcc21-3e5d-4092-b1b8-2eb6acea35cb: test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 ( a777b042-8585-46ad-bc0a-bc47f19c6395 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.034-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-457--4104909142373009110, commit timestamp: Timestamp(1574796748, 1518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.994-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.021-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 with provided UUID: 1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd and options: { uuid: UUID("1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.036-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 63113919-eb48-46fd-897f-571d538bfe0d: test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 ( a777b042-8585-46ad-bc0a-bc47f19c6395 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.994-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (c3b2f2c7-1ecd-4f72-8da4-27a519319358) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796748, 2085), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.037-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.038-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 with provided UUID: 1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd and options: { uuid: UUID("1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.994-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (c3b2f2c7-1ecd-4f72-8da4-27a519319358).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.043-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a) to test5_fsmdb0.agg_out and drop c3b2f2c7-1ecd-4f72-8da4-27a519319358.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.056-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.994-0500 I STORAGE [conn112] renameCollection: renaming collection eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a from test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.043-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (c3b2f2c7-1ecd-4f72-8da4-27a519319358) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796748, 2085), t: 1 } and commit timestamp Timestamp(1574796748, 2085)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.060-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a) to test5_fsmdb0.agg_out and drop c3b2f2c7-1ecd-4f72-8da4-27a519319358.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.994-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c3b2f2c7-1ecd-4f72-8da4-27a519319358)'. Ident: 'index-458-8224331490264904478', commit timestamp: 'Timestamp(1574796748, 2085)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.043-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (c3b2f2c7-1ecd-4f72-8da4-27a519319358).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.060-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (c3b2f2c7-1ecd-4f72-8da4-27a519319358) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796748, 2085), t: 1 } and commit timestamp Timestamp(1574796748, 2085)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.994-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c3b2f2c7-1ecd-4f72-8da4-27a519319358)'. Ident: 'index-459-8224331490264904478', commit timestamp: 'Timestamp(1574796748, 2085)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.043-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a from test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.060-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (c3b2f2c7-1ecd-4f72-8da4-27a519319358).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.994-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-456-8224331490264904478, commit timestamp: Timestamp(1574796748, 2085)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.043-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c3b2f2c7-1ecd-4f72-8da4-27a519319358)'. Ident: 'index-464--8000595249233899911', commit timestamp: 'Timestamp(1574796748, 2085)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.060-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a from test5_fsmdb0.tmp.agg_out.0b09322e-f329-4cf8-88f3-30de6a0c24aa to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.994-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.043-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c3b2f2c7-1ecd-4f72-8da4-27a519319358)'. Ident: 'index-471--8000595249233899911', commit timestamp: 'Timestamp(1574796748, 2085)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.060-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c3b2f2c7-1ecd-4f72-8da4-27a519319358)'. Ident: 'index-464--4104909142373009110', commit timestamp: 'Timestamp(1574796748, 2085)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.995-0500 I INDEX [conn114] Registering index build: 93447c12-e4a0-4c4a-ab8c-7ea7f14acbf4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.043-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-463--8000595249233899911, commit timestamp: Timestamp(1574796748, 2085)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.060-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c3b2f2c7-1ecd-4f72-8da4-27a519319358)'. Ident: 'index-471--4104909142373009110', commit timestamp: 'Timestamp(1574796748, 2085)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.995-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8168655674022552041, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2373422161933315866, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796748863), clusterTime: Timestamp(1574796748, 2) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796748, 2), signature: { hash: BinData(0, 738EFF41AD79DEE4C1F6CEEA4B0A3ECA9AF375C2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 130ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.059-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.060-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-463--4104909142373009110, commit timestamp: Timestamp(1574796748, 2085)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:28.995-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.059-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.080-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.005-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.059-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: c899593c-ae67-496f-9505-2e360a1e1a6d: test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c (32e8414d-bc21-4662-9fd8-de58199f7587 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.080-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.012-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.060-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.080-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 8890a233-b308-4ee7-bd2e-256927ff9691: test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c (32e8414d-bc21-4662-9fd8-de58199f7587 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.012-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.060-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.080-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.012-0500 I STORAGE [conn110] Index build initialized: 26b05e26-6dbf-4d27-a806-1114427ff1a7: test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 (298522d2-dfa4-4ba4-8daf-896261426c8d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.062-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.080-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.012-0500 I INDEX [conn110] Waiting for index build to complete: 26b05e26-6dbf-4d27-a806-1114427ff1a7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.065-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f with provided UUID: 6b799a27-b1b1-48cd-afd7-f49a9ed9712b and options: { uuid: UUID("6b799a27-b1b1-48cd-afd7-f49a9ed9712b"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.083-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.013-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.067-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c899593c-ae67-496f-9505-2e360a1e1a6d: test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c ( 32e8414d-bc21-4662-9fd8-de58199f7587 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.085-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f with provided UUID: 6b799a27-b1b1-48cd-afd7-f49a9ed9712b and options: { uuid: UUID("6b799a27-b1b1-48cd-afd7-f49a9ed9712b"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.013-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: b9dffa22-5301-4ceb-923d-c25c679d0f43: test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c ( 32e8414d-bc21-4662-9fd8-de58199f7587 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.080-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.086-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 8890a233-b308-4ee7-bd2e-256927ff9691: test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c ( 32e8414d-bc21-4662-9fd8-de58199f7587 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.013-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.096-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.100-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.015-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f with generated UUID: 6b799a27-b1b1-48cd-afd7-f49a9ed9712b and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.096-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.119-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.024-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.096-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 012aa315-2cee-4ce4-9a2b-34256f5e2455: test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 (298522d2-dfa4-4ba4-8daf-896261426c8d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.119-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.042-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.097-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.119-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 865747fb-ead9-457d-a7cd-97dac084addf: test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 (298522d2-dfa4-4ba4-8daf-896261426c8d ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.042-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.097-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.119-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.042-0500 I STORAGE [conn114] Index build initialized: 93447c12-e4a0-4c4a-ab8c-7ea7f14acbf4: test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 (1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.101-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.120-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.042-0500 I INDEX [conn114] Waiting for index build to complete: 93447c12-e4a0-4c4a-ab8c-7ea7f14acbf4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.109-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 012aa315-2cee-4ce4-9a2b-34256f5e2455: test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 ( 298522d2-dfa4-4ba4-8daf-896261426c8d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.123-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.042-0500 I INDEX [conn46] Index build completed: b9dffa22-5301-4ceb-923d-c25c679d0f43
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.116-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.132-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 865747fb-ead9-457d-a7cd-97dac084addf: test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 ( 298522d2-dfa4-4ba4-8daf-896261426c8d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.043-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 26b05e26-6dbf-4d27-a806-1114427ff1a7: test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 ( 298522d2-dfa4-4ba4-8daf-896261426c8d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.116-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.140-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.043-0500 I INDEX [conn110] Index build completed: 26b05e26-6dbf-4d27-a806-1114427ff1a7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.116-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 3c0cb4c7-75af-4bd3-bb7f-6690e04f7874: test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 (1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.140-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.049-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.116-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.140-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: c5418cc7-15af-4bd0-b92a-d3eecbd4c3fc: test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 (1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.049-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.117-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.140-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.049-0500 I INDEX [conn112] Registering index build: f198a09d-ba30-4ea0-af0d-0ca654ca7d49
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.119-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.141-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.049-0500 I COMMAND [conn108] CMD: drop test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.123-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3c0cb4c7-75af-4bd3-bb7f-6690e04f7874: test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 ( 1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.143-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.050-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.124-0500 I COMMAND [ReplWriterWorker-0] CMD: drop test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.148-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c5418cc7-15af-4bd0-b92a-d3eecbd4c3fc: test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 ( 1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.051-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.125-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 (a777b042-8585-46ad-bc0a-bc47f19c6395) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 1320), t: 1 } and commit timestamp Timestamp(1574796749, 1320)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.151-0500 I COMMAND [ReplWriterWorker-0] CMD: drop test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.058-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 93447c12-e4a0-4c4a-ab8c-7ea7f14acbf4: test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 ( 1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.125-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 (a777b042-8585-46ad-bc0a-bc47f19c6395).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.151-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 (a777b042-8585-46ad-bc0a-bc47f19c6395) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 1320), t: 1 } and commit timestamp Timestamp(1574796749, 1320)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.069-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.125-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 (a777b042-8585-46ad-bc0a-bc47f19c6395)'. Ident: 'index-476--8000595249233899911', commit timestamp: 'Timestamp(1574796749, 1320)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.151-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 (a777b042-8585-46ad-bc0a-bc47f19c6395).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.069-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.125-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 (a777b042-8585-46ad-bc0a-bc47f19c6395)'. Ident: 'index-483--8000595249233899911', commit timestamp: 'Timestamp(1574796749, 1320)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.151-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 (a777b042-8585-46ad-bc0a-bc47f19c6395)'. Ident: 'index-476--4104909142373009110', commit timestamp: 'Timestamp(1574796749, 1320)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.069-0500 I STORAGE [conn112] Index build initialized: f198a09d-ba30-4ea0-af0d-0ca654ca7d49: test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f (6b799a27-b1b1-48cd-afd7-f49a9ed9712b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.125-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2'. Ident: collection-475--8000595249233899911, commit timestamp: Timestamp(1574796749, 1320)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.151-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 (a777b042-8585-46ad-bc0a-bc47f19c6395)'. Ident: 'index-483--4104909142373009110', commit timestamp: 'Timestamp(1574796749, 1320)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.069-0500 I INDEX [conn112] Waiting for index build to complete: f198a09d-ba30-4ea0-af0d-0ca654ca7d49
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.140-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.151-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2'. Ident: collection-475--4104909142373009110, commit timestamp: Timestamp(1574796749, 1320)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.070-0500 I INDEX [conn114] Index build completed: 93447c12-e4a0-4c4a-ab8c-7ea7f14acbf4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.140-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.163-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.070-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 (a777b042-8585-46ad-bc0a-bc47f19c6395) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.140-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: cc893eee-e3d4-4517-802f-c1b21b586592: test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f (6b799a27-b1b1-48cd-afd7-f49a9ed9712b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.163-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.070-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 (a777b042-8585-46ad-bc0a-bc47f19c6395).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.140-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.163-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: b0ef8e38-5835-449e-a408-4a3ea2da075a: test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f (6b799a27-b1b1-48cd-afd7-f49a9ed9712b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.070-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 (a777b042-8585-46ad-bc0a-bc47f19c6395)'. Ident: 'index-464-8224331490264904478', commit timestamp: 'Timestamp(1574796749, 1320)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.141-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.163-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.070-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2 (a777b042-8585-46ad-bc0a-bc47f19c6395)'. Ident: 'index-467-8224331490264904478', commit timestamp: 'Timestamp(1574796749, 1320)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.142-0500 I COMMAND [ReplWriterWorker-8] CMD: drop test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.164-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.070-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2'. Ident: collection-462-8224331490264904478, commit timestamp: Timestamp(1574796749, 1320)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.142-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 (298522d2-dfa4-4ba4-8daf-896261426c8d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 1323), t: 1 } and commit timestamp Timestamp(1574796749, 1323)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.165-0500 I COMMAND [ReplWriterWorker-14] CMD: drop test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.070-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.142-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 (298522d2-dfa4-4ba4-8daf-896261426c8d).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.165-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 (298522d2-dfa4-4ba4-8daf-896261426c8d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 1323), t: 1 } and commit timestamp Timestamp(1574796749, 1323)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.070-0500 I COMMAND [conn68] command test5_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 686145392646853143, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7628043130608509805, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796748867), clusterTime: Timestamp(1574796748, 5) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796748, 6), signature: { hash: BinData(0, 738EFF41AD79DEE4C1F6CEEA4B0A3ECA9AF375C2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.f30a4615-8a48-44d3-a569-70f2535965d2\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:996 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 202ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.142-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 (298522d2-dfa4-4ba4-8daf-896261426c8d)'. Ident: 'index-482--8000595249233899911', commit timestamp: 'Timestamp(1574796749, 1323)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.165-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 (298522d2-dfa4-4ba4-8daf-896261426c8d).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.070-0500 I COMMAND [conn110] CMD: drop test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.142-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 (298522d2-dfa4-4ba4-8daf-896261426c8d)'. Ident: 'index-491--8000595249233899911', commit timestamp: 'Timestamp(1574796749, 1323)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.165-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 (298522d2-dfa4-4ba4-8daf-896261426c8d)'. Ident: 'index-482--4104909142373009110', commit timestamp: 'Timestamp(1574796749, 1323)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.071-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.142-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003'. Ident: collection-481--8000595249233899911, commit timestamp: Timestamp(1574796749, 1323)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.165-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 (298522d2-dfa4-4ba4-8daf-896261426c8d)'. Ident: 'index-491--4104909142373009110', commit timestamp: 'Timestamp(1574796749, 1323)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.072-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.143-0500 I COMMAND [ReplWriterWorker-14] CMD: drop test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.165-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003'. Ident: collection-481--4104909142373009110, commit timestamp: Timestamp(1574796749, 1323)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.073-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 (298522d2-dfa4-4ba4-8daf-896261426c8d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.143-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c (32e8414d-bc21-4662-9fd8-de58199f7587) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 1324), t: 1 } and commit timestamp Timestamp(1574796749, 1324)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.166-0500 I COMMAND [ReplWriterWorker-5] CMD: drop test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.073-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 (298522d2-dfa4-4ba4-8daf-896261426c8d).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.143-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c (32e8414d-bc21-4662-9fd8-de58199f7587).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.166-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c (32e8414d-bc21-4662-9fd8-de58199f7587) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 1324), t: 1 } and commit timestamp Timestamp(1574796749, 1324)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.073-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 (298522d2-dfa4-4ba4-8daf-896261426c8d)'. Ident: 'index-472-8224331490264904478', commit timestamp: 'Timestamp(1574796749, 1323)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.143-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c (32e8414d-bc21-4662-9fd8-de58199f7587)'. Ident: 'index-480--8000595249233899911', commit timestamp: 'Timestamp(1574796749, 1324)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.166-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c (32e8414d-bc21-4662-9fd8-de58199f7587).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.073-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003 (298522d2-dfa4-4ba4-8daf-896261426c8d)'. Ident: 'index-477-8224331490264904478', commit timestamp: 'Timestamp(1574796749, 1323)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.143-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c (32e8414d-bc21-4662-9fd8-de58199f7587)'. Ident: 'index-487--8000595249233899911', commit timestamp: 'Timestamp(1574796749, 1324)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.166-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c (32e8414d-bc21-4662-9fd8-de58199f7587)'. Ident: 'index-480--4104909142373009110', commit timestamp: 'Timestamp(1574796749, 1324)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.073-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003'. Ident: collection-470-8224331490264904478, commit timestamp: Timestamp(1574796749, 1323)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.143-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c'. Ident: collection-479--8000595249233899911, commit timestamp: Timestamp(1574796749, 1324)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.166-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c (32e8414d-bc21-4662-9fd8-de58199f7587)'. Ident: 'index-487--4104909142373009110', commit timestamp: 'Timestamp(1574796749, 1324)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.073-0500 I COMMAND [conn114] CMD: drop test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.143-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.166-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c'. Ident: collection-479--4104909142373009110, commit timestamp: Timestamp(1574796749, 1324)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.073-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c (32e8414d-bc21-4662-9fd8-de58199f7587) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.144-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f with provided UUID: 0a067f39-bf1a-45f5-96ad-cc591bd1c137 and options: { uuid: UUID("0a067f39-bf1a-45f5-96ad-cc591bd1c137"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.166-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.073-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c (32e8414d-bc21-4662-9fd8-de58199f7587).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.146-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: cc893eee-e3d4-4517-802f-c1b21b586592: test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f ( 6b799a27-b1b1-48cd-afd7-f49a9ed9712b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:32.159-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796749, 2333), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3002ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:32.179-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796749, 2337), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3001ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.167-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f with provided UUID: 0a067f39-bf1a-45f5-96ad-cc591bd1c137 and options: { uuid: UUID("0a067f39-bf1a-45f5-96ad-cc591bd1c137"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.073-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c (32e8414d-bc21-4662-9fd8-de58199f7587)'. Ident: 'index-471-8224331490264904478', commit timestamp: 'Timestamp(1574796749, 1324)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.160-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:32.290-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796749, 3847), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3035ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.168-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: b0ef8e38-5835-449e-a408-4a3ea2da075a: test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f ( 6b799a27-b1b1-48cd-afd7-f49a9ed9712b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:32.251-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796749, 3408), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3021ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.073-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c (32e8414d-bc21-4662-9fd8-de58199f7587)'. Ident: 'index-473-8224331490264904478', commit timestamp: 'Timestamp(1574796749, 1324)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.164-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 with provided UUID: 527c321b-1ef4-445f-9270-df012bbc287a and options: { uuid: UUID("527c321b-1ef4-445f-9270-df012bbc287a"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:32.342-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796752, 1010), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 162ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.182-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:32.293-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796749, 3848), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 183ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.073-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c'. Ident: collection-469-8224331490264904478, commit timestamp: Timestamp(1574796749, 1324)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.179-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.185-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 with provided UUID: 527c321b-1ef4-445f-9270-df012bbc287a and options: { uuid: UUID("527c321b-1ef4-445f-9270-df012bbc287a"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:32.378-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796752, 1079), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 165ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.073-0500 I COMMAND [conn67] command test5_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4272852923614364759, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8764944691707715216, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796748932), clusterTime: Timestamp(1574796748, 1269) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796748, 1334), signature: { hash: BinData(0, 738EFF41AD79DEE4C1F6CEEA4B0A3ECA9AF375C2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.d60a2b43-265b-406a-ac4c-53136fd50003\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:996 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 140ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.180-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 with provided UUID: 1f088086-0af7-4acd-bbe9-9fc4c50862ee and options: { uuid: UUID("1f088086-0af7-4acd-bbe9-9fc4c50862ee"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.199-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.073-0500 I COMMAND [conn70] command test5_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5413370710862244475, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4895672689044267539, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796748912), clusterTime: Timestamp(1574796748, 1137) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796748, 1269), signature: { hash: BinData(0, 738EFF41AD79DEE4C1F6CEEA4B0A3ECA9AF375C2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.34bee2e4-2791-43a1-b545-0f62da83322c\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:996 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 143ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.196-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.200-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 with provided UUID: 1f088086-0af7-4acd-bbe9-9fc4c50862ee and options: { uuid: UUID("1f088086-0af7-4acd-bbe9-9fc4c50862ee"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.074-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f with generated UUID: 0a067f39-bf1a-45f5-96ad-cc591bd1c137 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.219-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.216-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.076-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: f198a09d-ba30-4ea0-af0d-0ca654ca7d49: test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f ( 6b799a27-b1b1-48cd-afd7-f49a9ed9712b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.219-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.238-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.076-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 with generated UUID: 527c321b-1ef4-445f-9270-df012bbc287a and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.219-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: e593bb63-20fe-42f2-8e81-67a1a0048424: test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f (0a067f39-bf1a-45f5-96ad-cc591bd1c137 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.238-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.076-0500 I INDEX [conn112] Index build completed: f198a09d-ba30-4ea0-af0d-0ca654ca7d49
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.219-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.238-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 61c2f1bf-247d-4b8b-a74b-df42e7053a0b: test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f (0a067f39-bf1a-45f5-96ad-cc591bd1c137 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.076-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 with generated UUID: 1f088086-0af7-4acd-bbe9-9fc4c50862ee and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.220-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.238-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.105-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.221-0500 I COMMAND [ReplWriterWorker-0] CMD: drop test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.239-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.106-0500 I INDEX [conn46] Registering index build: 99ea66bd-6f13-49bf-a4c8-0e63c57948f1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.221-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 (1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 2332), t: 1 } and commit timestamp Timestamp(1574796749, 2332)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.240-0500 I COMMAND [ReplWriterWorker-1] CMD: drop test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.112-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.221-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 (1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.240-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 (1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 2332), t: 1 } and commit timestamp Timestamp(1574796749, 2332)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.121-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.221-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 (1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd)'. Ident: 'index-486--8000595249233899911', commit timestamp: 'Timestamp(1574796749, 2332)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.240-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 (1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.134-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.221-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 (1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd)'. Ident: 'index-493--8000595249233899911', commit timestamp: 'Timestamp(1574796749, 2332)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.240-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 (1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd)'. Ident: 'index-486--4104909142373009110', commit timestamp: 'Timestamp(1574796749, 2332)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.134-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.221-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444'. Ident: collection-485--8000595249233899911, commit timestamp: Timestamp(1574796749, 2332)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.240-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 (1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd)'. Ident: 'index-493--4104909142373009110', commit timestamp: 'Timestamp(1574796749, 2332)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.134-0500 I STORAGE [conn46] Index build initialized: 99ea66bd-6f13-49bf-a4c8-0e63c57948f1: test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f (0a067f39-bf1a-45f5-96ad-cc591bd1c137 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.222-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.240-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444'. Ident: collection-485--4104909142373009110, commit timestamp: Timestamp(1574796749, 2332)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.134-0500 I INDEX [conn46] Waiting for index build to complete: 99ea66bd-6f13-49bf-a4c8-0e63c57948f1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.222-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f (6b799a27-b1b1-48cd-afd7-f49a9ed9712b) to test5_fsmdb0.agg_out and drop eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.241-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f (6b799a27-b1b1-48cd-afd7-f49a9ed9712b) to test5_fsmdb0.agg_out and drop eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.135-0500 I INDEX [conn110] Registering index build: a4762674-6d45-420b-a917-77c6f3033a66
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.222-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 2333), t: 1 } and commit timestamp Timestamp(1574796749, 2333)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.242-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.135-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.222-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.243-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 2333), t: 1 } and commit timestamp Timestamp(1574796749, 2333)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.135-0500 I INDEX [conn108] Registering index build: 30bc17c7-8eb6-46aa-85f9-a5e095b974e9
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.223-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 6b799a27-b1b1-48cd-afd7-f49a9ed9712b from test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.243-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.135-0500 I COMMAND [conn114] CMD: drop test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.223-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a)'. Ident: 'index-474--8000595249233899911', commit timestamp: 'Timestamp(1574796749, 2333)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.243-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 6b799a27-b1b1-48cd-afd7-f49a9ed9712b from test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.135-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.223-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a)'. Ident: 'index-477--8000595249233899911', commit timestamp: 'Timestamp(1574796749, 2333)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.243-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a)'. Ident: 'index-474--4104909142373009110', commit timestamp: 'Timestamp(1574796749, 2333)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.147-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.223-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-473--8000595249233899911, commit timestamp: Timestamp(1574796749, 2333)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.243-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a)'. Ident: 'index-477--4104909142373009110', commit timestamp: 'Timestamp(1574796749, 2333)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.224-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: e593bb63-20fe-42f2-8e81-67a1a0048424: test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f ( 0a067f39-bf1a-45f5-96ad-cc591bd1c137 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.243-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-473--4104909142373009110, commit timestamp: Timestamp(1574796749, 2333)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.241-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.244-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 61c2f1bf-247d-4b8b-a74b-df42e7053a0b: test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f ( 0a067f39-bf1a-45f5-96ad-cc591bd1c137 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I STORAGE [conn110] Index build initialized: a4762674-6d45-420b-a917-77c6f3033a66: test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 (527c321b-1ef4-445f-9270-df012bbc287a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.241-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.259-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I INDEX [conn110] Waiting for index build to complete: a4762674-6d45-420b-a917-77c6f3033a66
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.241-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 1019fd6b-6f01-4559-8cc3-d7bfdc55c786: test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 (527c321b-1ef4-445f-9270-df012bbc287a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.259-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 (1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.241-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.259-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 78144a24-082d-4dd8-b0ec-c022af0e451a: test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 (527c321b-1ef4-445f-9270-df012bbc287a ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 (1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.242-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.259-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 (1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd)'. Ident: 'index-476-8224331490264904478', commit timestamp: 'Timestamp(1574796749, 2332)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.244-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.259-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444 (1e620c18-1571-41a5-a0dd-b7d2e3d7d8cd)'. Ident: 'index-479-8224331490264904478', commit timestamp: 'Timestamp(1574796749, 2332)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.245-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 with provided UUID: 77938539-03cd-409c-9dc0-686f3df76c56 and options: { uuid: UUID("77938539-03cd-409c-9dc0-686f3df76c56"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.261-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444'. Ident: collection-474-8224331490264904478, commit timestamp: Timestamp(1574796749, 2332)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.247-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 1019fd6b-6f01-4559-8cc3-d7bfdc55c786: test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 ( 527c321b-1ef4-445f-9270-df012bbc287a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.263-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 78144a24-082d-4dd8-b0ec-c022af0e451a: test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 ( 527c321b-1ef4-445f-9270-df012bbc287a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.263-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.264-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 with provided UUID: 77938539-03cd-409c-9dc0-686f3df76c56 and options: { uuid: UUID("77938539-03cd-409c-9dc0-686f3df76c56"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 2333), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.263-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 with provided UUID: 08807cf2-77b8-4659-bb56-46ea20402109 and options: { uuid: UUID("08807cf2-77b8-4659-bb56-46ea20402109"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.278-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.278-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.279-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 with provided UUID: 08807cf2-77b8-4659-bb56-46ea20402109 and options: { uuid: UUID("08807cf2-77b8-4659-bb56-46ea20402109"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I STORAGE [conn112] renameCollection: renaming collection 6b799a27-b1b1-48cd-afd7-f49a9ed9712b from test5_fsmdb0.tmp.agg_out.5db0e967-faf8-4d02-886a-b45c373ed49f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.293-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.293-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I COMMAND [conn71] command test5_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4068714444578088402, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7204338527759023310, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796748965), clusterTime: Timestamp(1574796748, 1582) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796748, 1582), signature: { hash: BinData(0, 738EFF41AD79DEE4C1F6CEEA4B0A3ECA9AF375C2), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.b1bca38f-e2ff-476a-8f0e-fd556c628444\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:996 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 189ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.293-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.306-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a)'. Ident: 'index-463-8224331490264904478', commit timestamp: 'Timestamp(1574796749, 2333)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.293-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 65779dff-3af1-44ef-8eb0-df1087717011: test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 (1f088086-0af7-4acd-bbe9-9fc4c50862ee ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.306-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (eacd0a8e-453b-4eee-bdd5-4eeab4e6cb2a)'. Ident: 'index-465-8224331490264904478', commit timestamp: 'Timestamp(1574796749, 2333)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.293-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.306-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: c2b5f1e8-d32e-425a-92e1-b98c52e90153: test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 (1f088086-0af7-4acd-bbe9-9fc4c50862ee ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.155-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-461-8224331490264904478, commit timestamp: Timestamp(1574796749, 2333)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.293-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.306-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.156-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.295-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.307-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.156-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 955896051428999304, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1780329377100155334, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796749013), clusterTime: Timestamp(1574796749, 4) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796749, 68), signature: { hash: BinData(0, E52E6B1E4F4C50C54E92D0317F270BD159ED2D2D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 141ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.298-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 65779dff-3af1-44ef-8eb0-df1087717011: test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 ( 1f088086-0af7-4acd-bbe9-9fc4c50862ee ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.309-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.157-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 99ea66bd-6f13-49bf-a4c8-0e63c57948f1: test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f ( 0a067f39-bf1a-45f5-96ad-cc591bd1c137 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.312-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f (0a067f39-bf1a-45f5-96ad-cc591bd1c137) to test5_fsmdb0.agg_out and drop 6b799a27-b1b1-48cd-afd7-f49a9ed9712b.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.310-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c2b5f1e8-d32e-425a-92e1-b98c52e90153: test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 ( 1f088086-0af7-4acd-bbe9-9fc4c50862ee ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.158-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.312-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (6b799a27-b1b1-48cd-afd7-f49a9ed9712b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 3407), t: 1 } and commit timestamp Timestamp(1574796749, 3407)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.322-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f (0a067f39-bf1a-45f5-96ad-cc591bd1c137) to test5_fsmdb0.agg_out and drop 6b799a27-b1b1-48cd-afd7-f49a9ed9712b.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.160-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.312-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (6b799a27-b1b1-48cd-afd7-f49a9ed9712b).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.322-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (6b799a27-b1b1-48cd-afd7-f49a9ed9712b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 3407), t: 1 } and commit timestamp Timestamp(1574796749, 3407)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.169-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: a4762674-6d45-420b-a917-77c6f3033a66: test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 ( 527c321b-1ef4-445f-9270-df012bbc287a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.312-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 0a067f39-bf1a-45f5-96ad-cc591bd1c137 from test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.322-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (6b799a27-b1b1-48cd-afd7-f49a9ed9712b).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.177-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.312-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6b799a27-b1b1-48cd-afd7-f49a9ed9712b)'. Ident: 'index-490--8000595249233899911', commit timestamp: 'Timestamp(1574796749, 3407)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.323-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 0a067f39-bf1a-45f5-96ad-cc591bd1c137 from test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.177-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.312-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6b799a27-b1b1-48cd-afd7-f49a9ed9712b)'. Ident: 'index-495--8000595249233899911', commit timestamp: 'Timestamp(1574796749, 3407)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.323-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6b799a27-b1b1-48cd-afd7-f49a9ed9712b)'. Ident: 'index-490--4104909142373009110', commit timestamp: 'Timestamp(1574796749, 3407)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.177-0500 I STORAGE [conn108] Index build initialized: 30bc17c7-8eb6-46aa-85f9-a5e095b974e9: test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 (1f088086-0af7-4acd-bbe9-9fc4c50862ee ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.312-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-489--8000595249233899911, commit timestamp: Timestamp(1574796749, 3407)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.323-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6b799a27-b1b1-48cd-afd7-f49a9ed9712b)'. Ident: 'index-495--4104909142373009110', commit timestamp: 'Timestamp(1574796749, 3407)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.177-0500 I INDEX [conn108] Waiting for index build to complete: 30bc17c7-8eb6-46aa-85f9-a5e095b974e9
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.313-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 (527c321b-1ef4-445f-9270-df012bbc287a) to test5_fsmdb0.agg_out and drop 0a067f39-bf1a-45f5-96ad-cc591bd1c137.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.323-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-489--4104909142373009110, commit timestamp: Timestamp(1574796749, 3407)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.177-0500 I INDEX [conn46] Index build completed: 99ea66bd-6f13-49bf-a4c8-0e63c57948f1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.313-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (0a067f39-bf1a-45f5-96ad-cc591bd1c137) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 3408), t: 1 } and commit timestamp Timestamp(1574796749, 3408)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.323-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 (527c321b-1ef4-445f-9270-df012bbc287a) to test5_fsmdb0.agg_out and drop 0a067f39-bf1a-45f5-96ad-cc591bd1c137.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.177-0500 I INDEX [conn110] Index build completed: a4762674-6d45-420b-a917-77c6f3033a66
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.313-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (0a067f39-bf1a-45f5-96ad-cc591bd1c137).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.323-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (0a067f39-bf1a-45f5-96ad-cc591bd1c137) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 3408), t: 1 } and commit timestamp Timestamp(1574796749, 3408)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.177-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.313-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 527c321b-1ef4-445f-9270-df012bbc287a from test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.323-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (0a067f39-bf1a-45f5-96ad-cc591bd1c137).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.178-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.313-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0a067f39-bf1a-45f5-96ad-cc591bd1c137)'. Ident: 'index-498--8000595249233899911', commit timestamp: 'Timestamp(1574796749, 3408)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.323-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 527c321b-1ef4-445f-9270-df012bbc287a from test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.178-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 with generated UUID: 77938539-03cd-409c-9dc0-686f3df76c56 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.313-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0a067f39-bf1a-45f5-96ad-cc591bd1c137)'. Ident: 'index-503--8000595249233899911', commit timestamp: 'Timestamp(1574796749, 3408)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.324-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0a067f39-bf1a-45f5-96ad-cc591bd1c137)'. Ident: 'index-498--4104909142373009110', commit timestamp: 'Timestamp(1574796749, 3408)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.180-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 with generated UUID: 08807cf2-77b8-4659-bb56-46ea20402109 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.313-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-497--8000595249233899911, commit timestamp: Timestamp(1574796749, 3408)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.324-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0a067f39-bf1a-45f5-96ad-cc591bd1c137)'. Ident: 'index-503--4104909142373009110', commit timestamp: 'Timestamp(1574796749, 3408)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.182-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.329-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.324-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-497--4104909142373009110, commit timestamp: Timestamp(1574796749, 3408)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.196-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 30bc17c7-8eb6-46aa-85f9-a5e095b974e9: test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 ( 1f088086-0af7-4acd-bbe9-9fc4c50862ee ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.329-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.341-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.196-0500 I INDEX [conn108] Index build completed: 30bc17c7-8eb6-46aa-85f9-a5e095b974e9
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.329-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 9ec0b4f9-f655-47b8-b73f-57048493912c: test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 (77938539-03cd-409c-9dc0-686f3df76c56 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.341-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.204-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.329-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.341-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 24fd90d7-aee2-40eb-886d-e9a51016321a: test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 (77938539-03cd-409c-9dc0-686f3df76c56 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.204-0500 I INDEX [conn112] Registering index build: fd444b44-ab38-4ceb-a179-778b8a52c9e9
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.330-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.341-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.212-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.332-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.342-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.228-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:29.334-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 9ec0b4f9-f655-47b8-b73f-57048493912c: test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 ( 77938539-03cd-409c-9dc0-686f3df76c56 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.344-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.228-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.112-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 (1f088086-0af7-4acd-bbe9-9fc4c50862ee) to test5_fsmdb0.agg_out and drop 527c321b-1ef4-445f-9270-df012bbc287a.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:29.346-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 24fd90d7-aee2-40eb-886d-e9a51016321a: test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 ( 77938539-03cd-409c-9dc0-686f3df76c56 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.228-0500 I STORAGE [conn112] Index build initialized: fd444b44-ab38-4ceb-a179-778b8a52c9e9: test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 (77938539-03cd-409c-9dc0-686f3df76c56 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.112-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (527c321b-1ef4-445f-9270-df012bbc287a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 3848), t: 1 } and commit timestamp Timestamp(1574796749, 3848)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.114-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 (1f088086-0af7-4acd-bbe9-9fc4c50862ee) to test5_fsmdb0.agg_out and drop 527c321b-1ef4-445f-9270-df012bbc287a.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.228-0500 I INDEX [conn112] Waiting for index build to complete: fd444b44-ab38-4ceb-a179-778b8a52c9e9
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.112-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (527c321b-1ef4-445f-9270-df012bbc287a).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.114-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (527c321b-1ef4-445f-9270-df012bbc287a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 3848), t: 1 } and commit timestamp Timestamp(1574796749, 3848)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.228-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.112-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 1f088086-0af7-4acd-bbe9-9fc4c50862ee from test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.114-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (527c321b-1ef4-445f-9270-df012bbc287a).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.228-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (6b799a27-b1b1-48cd-afd7-f49a9ed9712b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 3407), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.112-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (527c321b-1ef4-445f-9270-df012bbc287a)'. Ident: 'index-500--8000595249233899911', commit timestamp: 'Timestamp(1574796749, 3848)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.114-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection 1f088086-0af7-4acd-bbe9-9fc4c50862ee from test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.228-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (6b799a27-b1b1-48cd-afd7-f49a9ed9712b).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.112-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (527c321b-1ef4-445f-9270-df012bbc287a)'. Ident: 'index-505--8000595249233899911', commit timestamp: 'Timestamp(1574796749, 3848)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.114-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (527c321b-1ef4-445f-9270-df012bbc287a)'. Ident: 'index-500--4104909142373009110', commit timestamp: 'Timestamp(1574796749, 3848)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.228-0500 I STORAGE [conn110] renameCollection: renaming collection 0a067f39-bf1a-45f5-96ad-cc591bd1c137 from test5_fsmdb0.tmp.agg_out.06f0e8de-860a-4067-8e16-2a20d0574b0f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.112-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-499--8000595249233899911, commit timestamp: Timestamp(1574796749, 3848)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.114-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (527c321b-1ef4-445f-9270-df012bbc287a)'. Ident: 'index-505--4104909142373009110', commit timestamp: 'Timestamp(1574796749, 3848)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.228-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6b799a27-b1b1-48cd-afd7-f49a9ed9712b)'. Ident: 'index-482-8224331490264904478', commit timestamp: 'Timestamp(1574796749, 3407)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.135-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 with provided UUID: d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc and options: { uuid: UUID("d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.114-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-499--4104909142373009110, commit timestamp: Timestamp(1574796749, 3848)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.228-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6b799a27-b1b1-48cd-afd7-f49a9ed9712b)'. Ident: 'index-483-8224331490264904478', commit timestamp: 'Timestamp(1574796749, 3407)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.147-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.148-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 with provided UUID: d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc and options: { uuid: UUID("d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.228-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-480-8224331490264904478, commit timestamp: Timestamp(1574796749, 3407)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.148-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce with provided UUID: ff19dd80-07b1-47d7-abdc-de2e7cb53d66 and options: { uuid: UUID("ff19dd80-07b1-47d7-abdc-de2e7cb53d66"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.163-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.228-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.161-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.164-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce with provided UUID: ff19dd80-07b1-47d7-abdc-de2e7cb53d66 and options: { uuid: UUID("ff19dd80-07b1-47d7-abdc-de2e7cb53d66"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.229-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (0a067f39-bf1a-45f5-96ad-cc591bd1c137) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 3408), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.175-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.180-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.229-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (0a067f39-bf1a-45f5-96ad-cc591bd1c137).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.175-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.197-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.229-0500 I STORAGE [conn46] renameCollection: renaming collection 527c321b-1ef4-445f-9270-df012bbc287a from test5_fsmdb0.tmp.agg_out.4ed34111-ca85-4068-b3db-4b892ad32da5 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.175-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: afe98ed4-a085-49d4-a3af-d4412b9402f6: test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 (08807cf2-77b8-4659-bb56-46ea20402109 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.197-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.229-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 9037628885907770823, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8611924300067092093, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796749071), clusterTime: Timestamp(1574796749, 1320) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796749, 1322), signature: { hash: BinData(0, E52E6B1E4F4C50C54E92D0317F270BD159ED2D2D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 155ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.175-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.197-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 6302383a-bcd5-4376-ad42-dca8ff8eb054: test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 (08807cf2-77b8-4659-bb56-46ea20402109 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.229-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0a067f39-bf1a-45f5-96ad-cc591bd1c137)'. Ident: 'index-488-8224331490264904478', commit timestamp: 'Timestamp(1574796749, 3408)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.176-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.197-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.229-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0a067f39-bf1a-45f5-96ad-cc591bd1c137)'. Ident: 'index-491-8224331490264904478', commit timestamp: 'Timestamp(1574796749, 3408)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.177-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 with provided UUID: 126f0461-584f-4161-b6e0-63f434414629 and options: { uuid: UUID("126f0461-584f-4161-b6e0-63f434414629"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.198-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.229-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-485-8224331490264904478, commit timestamp: Timestamp(1574796749, 3408)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.179-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.199-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 with provided UUID: 126f0461-584f-4161-b6e0-63f434414629 and options: { uuid: UUID("126f0461-584f-4161-b6e0-63f434414629"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.229-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.189-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: afe98ed4-a085-49d4-a3af-d4412b9402f6: test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 ( 08807cf2-77b8-4659-bb56-46ea20402109 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.201-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.229-0500 I INDEX [conn114] Registering index build: 14196694-c9ab-48ff-8e01-542589524d22
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.196-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.210-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 6302383a-bcd5-4376-ad42-dca8ff8eb054: test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 ( 08807cf2-77b8-4659-bb56-46ea20402109 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.229-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1788634627429726909, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3229963263518261057, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796749075), clusterTime: Timestamp(1574796749, 1324) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796749, 1325), signature: { hash: BinData(0, E52E6B1E4F4C50C54E92D0317F270BD159ED2D2D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 153ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.202-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 (77938539-03cd-409c-9dc0-686f3df76c56) to test5_fsmdb0.agg_out and drop 1f088086-0af7-4acd-bbe9-9fc4c50862ee.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.219-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.230-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.202-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (1f088086-0af7-4acd-bbe9-9fc4c50862ee) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 571), t: 1 } and commit timestamp Timestamp(1574796752, 571)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.224-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 (77938539-03cd-409c-9dc0-686f3df76c56) to test5_fsmdb0.agg_out and drop 1f088086-0af7-4acd-bbe9-9fc4c50862ee.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.231-0500 I COMMAND [conn70] CMD: dropIndexes test5_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.202-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (1f088086-0af7-4acd-bbe9-9fc4c50862ee).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.224-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (1f088086-0af7-4acd-bbe9-9fc4c50862ee) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 571), t: 1 } and commit timestamp Timestamp(1574796752, 571)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.244-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.202-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 77938539-03cd-409c-9dc0-686f3df76c56 from test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.224-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (1f088086-0af7-4acd-bbe9-9fc4c50862ee).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.253-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.202-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1f088086-0af7-4acd-bbe9-9fc4c50862ee)'. Ident: 'index-502--8000595249233899911', commit timestamp: 'Timestamp(1574796752, 571)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.224-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 77938539-03cd-409c-9dc0-686f3df76c56 from test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.253-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.202-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1f088086-0af7-4acd-bbe9-9fc4c50862ee)'. Ident: 'index-511--8000595249233899911', commit timestamp: 'Timestamp(1574796752, 571)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.224-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1f088086-0af7-4acd-bbe9-9fc4c50862ee)'. Ident: 'index-502--4104909142373009110', commit timestamp: 'Timestamp(1574796752, 571)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.253-0500 I STORAGE [conn114] Index build initialized: 14196694-c9ab-48ff-8e01-542589524d22: test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 (08807cf2-77b8-4659-bb56-46ea20402109 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.202-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-501--8000595249233899911, commit timestamp: Timestamp(1574796752, 571)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.224-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1f088086-0af7-4acd-bbe9-9fc4c50862ee)'. Ident: 'index-511--4104909142373009110', commit timestamp: 'Timestamp(1574796752, 571)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.253-0500 I INDEX [conn114] Waiting for index build to complete: 14196694-c9ab-48ff-8e01-542589524d22
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.223-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.224-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-501--4104909142373009110, commit timestamp: Timestamp(1574796752, 571)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.253-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.223-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.240-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.253-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (527c321b-1ef4-445f-9270-df012bbc287a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796749, 3848), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.223-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 064bf85f-9cb2-48a8-a69f-bbcc73294d40: test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.240-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.253-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (527c321b-1ef4-445f-9270-df012bbc287a).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.223-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.240-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 96168d1b-30f1-4d47-a862-e930e0a99415: test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.253-0500 I STORAGE [conn108] renameCollection: renaming collection 1f088086-0af7-4acd-bbe9-9fc4c50862ee from test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.224-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.240-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.226-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.241-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:29.254-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: fd444b44-ab38-4ceb-a179-778b8a52c9e9: test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 ( 77938539-03cd-409c-9dc0-686f3df76c56 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.229-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 (08807cf2-77b8-4659-bb56-46ea20402109) to test5_fsmdb0.agg_out and drop 77938539-03cd-409c-9dc0-686f3df76c56.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.243-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.107-0500 I INDEX [conn112] Index build completed: fd444b44-ab38-4ceb-a179-778b8a52c9e9
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.229-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (77938539-03cd-409c-9dc0-686f3df76c56) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 1011), t: 1 } and commit timestamp Timestamp(1574796752, 1011)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.245-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 (08807cf2-77b8-4659-bb56-46ea20402109) to test5_fsmdb0.agg_out and drop 77938539-03cd-409c-9dc0-686f3df76c56.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.107-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (527c321b-1ef4-445f-9270-df012bbc287a)'. Ident: 'index-489-8224331490264904478', commit timestamp: 'Timestamp(1574796749, 3848)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.229-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (77938539-03cd-409c-9dc0-686f3df76c56).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.245-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (77938539-03cd-409c-9dc0-686f3df76c56) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 1011), t: 1 } and commit timestamp Timestamp(1574796752, 1011)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.107-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (527c321b-1ef4-445f-9270-df012bbc287a)'. Ident: 'index-493-8224331490264904478', commit timestamp: 'Timestamp(1574796749, 3848)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.229-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 08807cf2-77b8-4659-bb56-46ea20402109 from test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.245-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (77938539-03cd-409c-9dc0-686f3df76c56).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.107-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-486-8224331490264904478, commit timestamp: Timestamp(1574796749, 3848)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.229-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (77938539-03cd-409c-9dc0-686f3df76c56)'. Ident: 'index-508--8000595249233899911', commit timestamp: 'Timestamp(1574796752, 1011)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:33.400-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796752, 1520), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 1147ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.245-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 08807cf2-77b8-4659-bb56-46ea20402109 from test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.107-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796749, 3237), signature: { hash: BinData(0, E52E6B1E4F4C50C54E92D0317F270BD159ED2D2D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2902ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.229-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (77938539-03cd-409c-9dc0-686f3df76c56)'. Ident: 'index-513--8000595249233899911', commit timestamp: 'Timestamp(1574796752, 1011)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.245-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (77938539-03cd-409c-9dc0-686f3df76c56)'. Ident: 'index-508--4104909142373009110', commit timestamp: 'Timestamp(1574796752, 1011)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.107-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.229-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-507--8000595249233899911, commit timestamp: Timestamp(1574796752, 1011)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.245-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (77938539-03cd-409c-9dc0-686f3df76c56)'. Ident: 'index-513--4104909142373009110', commit timestamp: 'Timestamp(1574796752, 1011)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.107-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562 appName: "tid:0" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.4d5ce2bd-b8fa-4bcc-ab5c-6e54fac65562", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "moderate", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796749, 3844), signature: { hash: BinData(0, E52E6B1E4F4C50C54E92D0317F270BD159ED2D2D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 16636 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2870ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.230-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e with provided UUID: 3602026a-acbd-43e9-9267-8b9a2fc24f0f and options: { uuid: UUID("3602026a-acbd-43e9-9267-8b9a2fc24f0f"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.245-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-507--4104909142373009110, commit timestamp: Timestamp(1574796752, 1011)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.107-0500 I COMMAND [conn103] command test5_fsmdb0.agg_out command: listIndexes { listIndexes: "agg_out", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, $clusterTime: { clusterTime: Timestamp(1574796749, 3847), signature: { hash: BinData(0, E52E6B1E4F4C50C54E92D0317F270BD159ED2D2D), keyId: 6763700092420489256 } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:495 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2853735 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 2853ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.230-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 064bf85f-9cb2-48a8-a69f-bbcc73294d40: test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 ( d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.246-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 96168d1b-30f1-4d47-a862-e930e0a99415: test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 ( d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.107-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } } ], fromMongos: true, needsMerge: true, collation: { locale: "simple" }, cursor: { batchSize: 0 }, runtimeConstants: { localNow: new Date(1574796749254), clusterTime: Timestamp(1574796749, 3847) }, use44SortKeys: true, allowImplicitCollectionCreation: false, shardVersion: [ Timestamp(1, 1), ObjectId('5ddd7dc43bbfe7fa5630eb06') ], lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796749, 3847), signature: { hash: BinData(0, E52E6B1E4F4C50C54E92D0317F270BD159ED2D2D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } planSummary: COLLSCAN cursorid:5706724159619259357 keysExamined:0 docsExamined:0 numYields:0 nreturned:0 queryHash:CC4733C9 planCacheKey:CC4733C9 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2852433 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 3 } } } protocol:op_msg 2852ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.245-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.246-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e with provided UUID: 3602026a-acbd-43e9-9267-8b9a2fc24f0f and options: { uuid: UUID("3602026a-acbd-43e9-9267-8b9a2fc24f0f"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.107-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796749, 3408), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796749, 3472), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796749, 3408). Collection minimum timestamp is Timestamp(1574796749, 3847)" errName:SnapshotUnavailable errCode:246 reslen:582 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2781655 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2781ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.267-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.262-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.107-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1069617349757893288, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 9026287291413974108, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796749075), clusterTime: Timestamp(1574796749, 1324) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796749, 1325), signature: { hash: BinData(0, E52E6B1E4F4C50C54E92D0317F270BD159ED2D2D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3031ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.267-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.285-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.108-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 with generated UUID: d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.267-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 43a4548b-f52b-4fcf-a44c-22d729654395: test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce (ff19dd80-07b1-47d7-abdc-de2e7cb53d66 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.285-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.108-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.268-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.285-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 320c8464-ea27-42c1-8a27-886a507c5b01: test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce (ff19dd80-07b1-47d7-abdc-de2e7cb53d66 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.108-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce with generated UUID: ff19dd80-07b1-47d7-abdc-de2e7cb53d66 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.270-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 with provided UUID: b28a4daa-804b-4466-9062-872a0aae5d42 and options: { uuid: UUID("b28a4daa-804b-4466-9062-872a0aae5d42"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.285-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.110-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.270-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.287-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.110-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 with generated UUID: 126f0461-584f-4161-b6e0-63f434414629 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.280-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.289-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 with provided UUID: b28a4daa-804b-4466-9062-872a0aae5d42 and options: { uuid: UUID("b28a4daa-804b-4466-9062-872a0aae5d42"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.121-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 14196694-c9ab-48ff-8e01-542589524d22: test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 ( 08807cf2-77b8-4659-bb56-46ea20402109 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.288-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.290-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.121-0500 I INDEX [conn114] Index build completed: 14196694-c9ab-48ff-8e01-542589524d22
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.289-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 43a4548b-f52b-4fcf-a44c-22d729654395: test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce ( ff19dd80-07b1-47d7-abdc-de2e7cb53d66 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.300-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 320c8464-ea27-42c1-8a27-886a507c5b01: test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce ( ff19dd80-07b1-47d7-abdc-de2e7cb53d66 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.121-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796749, 3405), signature: { hash: BinData(0, E52E6B1E4F4C50C54E92D0317F270BD159ED2D2D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 15859 } }, Collection: { acquireCount: { w: 1, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 48 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2907ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.308-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.308-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.132-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.308-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.327-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.132-0500 I INDEX [conn112] Registering index build: e80ac6b4-ffa8-4197-88ba-f88b4090af67
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.308-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: c5d630d0-e5f8-4789-830d-f9a32ac3fabb: test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 (126f0461-584f-4161-b6e0-63f434414629 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.327-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.138-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.308-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.327-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: b6a60149-3980-471b-8cd0-e984d966e626: test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 (126f0461-584f-4161-b6e0-63f434414629 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.143-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.308-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.328-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.158-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.310-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc) to test5_fsmdb0.agg_out and drop 08807cf2-77b8-4659-bb56-46ea20402109.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.328-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.158-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.311-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.329-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc) to test5_fsmdb0.agg_out and drop 08807cf2-77b8-4659-bb56-46ea20402109.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.158-0500 I STORAGE [conn112] Index build initialized: e80ac6b4-ffa8-4197-88ba-f88b4090af67: test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.312-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (08807cf2-77b8-4659-bb56-46ea20402109) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 1520), t: 1 } and commit timestamp Timestamp(1574796752, 1520)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.331-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.158-0500 I INDEX [conn112] Waiting for index build to complete: e80ac6b4-ffa8-4197-88ba-f88b4090af67
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.312-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (08807cf2-77b8-4659-bb56-46ea20402109).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.331-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (08807cf2-77b8-4659-bb56-46ea20402109) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 1520), t: 1 } and commit timestamp Timestamp(1574796752, 1520)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.158-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.312-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc from test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.331-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (08807cf2-77b8-4659-bb56-46ea20402109).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.158-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (1f088086-0af7-4acd-bbe9-9fc4c50862ee) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 571), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.312-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (08807cf2-77b8-4659-bb56-46ea20402109)'. Ident: 'index-510--8000595249233899911', commit timestamp: 'Timestamp(1574796752, 1520)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.331-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc from test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.159-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (1f088086-0af7-4acd-bbe9-9fc4c50862ee).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.312-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (08807cf2-77b8-4659-bb56-46ea20402109)'. Ident: 'index-519--8000595249233899911', commit timestamp: 'Timestamp(1574796752, 1520)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.331-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (08807cf2-77b8-4659-bb56-46ea20402109)'. Ident: 'index-510--4104909142373009110', commit timestamp: 'Timestamp(1574796752, 1520)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.159-0500 I STORAGE [conn108] renameCollection: renaming collection 77938539-03cd-409c-9dc0-686f3df76c56 from test5_fsmdb0.tmp.agg_out.c60d3acb-defd-4b59-b4f3-769d25c619a8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.312-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-509--8000595249233899911, commit timestamp: Timestamp(1574796752, 1520)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.331-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (08807cf2-77b8-4659-bb56-46ea20402109)'. Ident: 'index-519--4104909142373009110', commit timestamp: 'Timestamp(1574796752, 1520)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.159-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1f088086-0af7-4acd-bbe9-9fc4c50862ee)'. Ident: 'index-490-8224331490264904478', commit timestamp: 'Timestamp(1574796752, 571)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.313-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 with provided UUID: d77031cf-6538-4b23-b849-16c37c1a47c0 and options: { uuid: UUID("d77031cf-6538-4b23-b849-16c37c1a47c0"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.331-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-509--4104909142373009110, commit timestamp: Timestamp(1574796752, 1520)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.159-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1f088086-0af7-4acd-bbe9-9fc4c50862ee)'. Ident: 'index-495-8224331490264904478', commit timestamp: 'Timestamp(1574796752, 571)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.314-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c5d630d0-e5f8-4789-830d-f9a32ac3fabb: test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 ( 126f0461-584f-4161-b6e0-63f434414629 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.332-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 with provided UUID: d77031cf-6538-4b23-b849-16c37c1a47c0 and options: { uuid: UUID("d77031cf-6538-4b23-b849-16c37c1a47c0"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.159-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-487-8224331490264904478, commit timestamp: Timestamp(1574796752, 571)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.327-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.333-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: b6a60149-3980-471b-8cd0-e984d966e626: test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 ( 126f0461-584f-4161-b6e0-63f434414629 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.159-0500 I INDEX [conn46] Registering index build: 88676b8f-340c-423e-939b-a93c9e05b702
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.350-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.349-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.159-0500 I INDEX [conn110] Registering index build: 28b03ecc-7e29-4e58-be29-0f2c52973aec
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.350-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.371-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.159-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.350-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 586f8862-5ad6-463d-960a-d5f0369ebaf6: test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e (3602026a-acbd-43e9-9267-8b9a2fc24f0f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.371-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.159-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3654718070009961174, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7543382715577545742, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796749157), clusterTime: Timestamp(1574796749, 2333) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796749, 2333), signature: { hash: BinData(0, E52E6B1E4F4C50C54E92D0317F270BD159ED2D2D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 19983 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3001ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.350-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.371-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 023ee355-6109-45f3-8691-2cbc95fa5d05: test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e (3602026a-acbd-43e9-9267-8b9a2fc24f0f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.159-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.351-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.371-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.161-0500 I COMMAND [conn71] CMD: dropIndexes test5_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.354-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.372-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.162-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.356-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce (ff19dd80-07b1-47d7-abdc-de2e7cb53d66) to test5_fsmdb0.agg_out and drop d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.375-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.171-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: e80ac6b4-ffa8-4197-88ba-f88b4090af67: test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 ( d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.356-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 2525), t: 1 } and commit timestamp Timestamp(1574796752, 2525)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.377-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce (ff19dd80-07b1-47d7-abdc-de2e7cb53d66) to test5_fsmdb0.agg_out and drop d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.178-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.356-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.377-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 2525), t: 1 } and commit timestamp Timestamp(1574796752, 2525)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.178-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.356-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection ff19dd80-07b1-47d7-abdc-de2e7cb53d66 from test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.377-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.178-0500 I STORAGE [conn46] Index build initialized: 88676b8f-340c-423e-939b-a93c9e05b702: test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce (ff19dd80-07b1-47d7-abdc-de2e7cb53d66 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.356-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc)'. Ident: 'index-516--8000595249233899911', commit timestamp: 'Timestamp(1574796752, 2525)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.377-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection ff19dd80-07b1-47d7-abdc-de2e7cb53d66 from test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.178-0500 I INDEX [conn46] Waiting for index build to complete: 88676b8f-340c-423e-939b-a93c9e05b702
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.357-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc)'. Ident: 'index-523--8000595249233899911', commit timestamp: 'Timestamp(1574796752, 2525)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.377-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc)'. Ident: 'index-516--4104909142373009110', commit timestamp: 'Timestamp(1574796752, 2525)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.179-0500 I INDEX [conn112] Index build completed: e80ac6b4-ffa8-4197-88ba-f88b4090af67
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.357-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-515--8000595249233899911, commit timestamp: Timestamp(1574796752, 2525)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.377-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc)'. Ident: 'index-523--4104909142373009110', commit timestamp: 'Timestamp(1574796752, 2525)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.179-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.357-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 586f8862-5ad6-463d-960a-d5f0369ebaf6: test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e ( 3602026a-acbd-43e9-9267-8b9a2fc24f0f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.377-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-515--4104909142373009110, commit timestamp: Timestamp(1574796752, 2525)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.179-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (77938539-03cd-409c-9dc0-686f3df76c56) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 1011), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.376-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.379-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 023ee355-6109-45f3-8691-2cbc95fa5d05: test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e ( 3602026a-acbd-43e9-9267-8b9a2fc24f0f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.179-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (77938539-03cd-409c-9dc0-686f3df76c56).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.376-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.396-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.179-0500 I STORAGE [conn114] renameCollection: renaming collection 08807cf2-77b8-4659-bb56-46ea20402109 from test5_fsmdb0.tmp.agg_out.e84a1324-ad8c-4e36-89ee-218cd9958c98 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.376-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 38e2f515-cecc-438d-bfb2-c238ac75d9d2: test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 (b28a4daa-804b-4466-9062-872a0aae5d42 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.396-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.179-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (77938539-03cd-409c-9dc0-686f3df76c56)'. Ident: 'index-499-8224331490264904478', commit timestamp: 'Timestamp(1574796752, 1011)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.376-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.396-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: ba401e11-8799-455d-8bfb-db8bae98f98d: test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 (b28a4daa-804b-4466-9062-872a0aae5d42 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.179-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (77938539-03cd-409c-9dc0-686f3df76c56)'. Ident: 'index-501-8224331490264904478', commit timestamp: 'Timestamp(1574796752, 1011)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.376-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.397-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.179-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-497-8224331490264904478, commit timestamp: Timestamp(1574796752, 1011)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.377-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 (126f0461-584f-4161-b6e0-63f434414629) to test5_fsmdb0.agg_out and drop ff19dd80-07b1-47d7-abdc-de2e7cb53d66.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.397-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.179-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.379-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.398-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 (126f0461-584f-4161-b6e0-63f434414629) to test5_fsmdb0.agg_out and drop ff19dd80-07b1-47d7-abdc-de2e7cb53d66.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.179-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8622341010429504770, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7065721432151026555, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796749178), clusterTime: Timestamp(1574796749, 2337) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796749, 2338), signature: { hash: BinData(0, E52E6B1E4F4C50C54E92D0317F270BD159ED2D2D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3000ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.380-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (ff19dd80-07b1-47d7-abdc-de2e7cb53d66) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 2528), t: 1 } and commit timestamp Timestamp(1574796752, 2528)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.400-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.180-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.380-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (ff19dd80-07b1-47d7-abdc-de2e7cb53d66).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.400-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (ff19dd80-07b1-47d7-abdc-de2e7cb53d66) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 2528), t: 1 } and commit timestamp Timestamp(1574796752, 2528)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.182-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e with generated UUID: 3602026a-acbd-43e9-9267-8b9a2fc24f0f and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.380-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 126f0461-584f-4161-b6e0-63f434414629 from test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.400-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (ff19dd80-07b1-47d7-abdc-de2e7cb53d66).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.182-0500 I COMMAND [conn65] CMD: dropIndexes test5_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.380-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ff19dd80-07b1-47d7-abdc-de2e7cb53d66)'. Ident: 'index-518--8000595249233899911', commit timestamp: 'Timestamp(1574796752, 2528)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.400-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 126f0461-584f-4161-b6e0-63f434414629 from test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.189-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.380-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ff19dd80-07b1-47d7-abdc-de2e7cb53d66)'. Ident: 'index-527--8000595249233899911', commit timestamp: 'Timestamp(1574796752, 2528)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.400-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ff19dd80-07b1-47d7-abdc-de2e7cb53d66)'. Ident: 'index-518--4104909142373009110', commit timestamp: 'Timestamp(1574796752, 2528)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.203-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.380-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-517--8000595249233899911, commit timestamp: Timestamp(1574796752, 2528)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.400-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ff19dd80-07b1-47d7-abdc-de2e7cb53d66)'. Ident: 'index-527--4104909142373009110', commit timestamp: 'Timestamp(1574796752, 2528)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.203-0500 I INDEX [conn114] Registering index build: f496d87c-ca53-4f65-a0c5-ba9b92bff432
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.381-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 with provided UUID: 1da4737b-0194-409d-befd-9879f5f65f50 and options: { uuid: UUID("1da4737b-0194-409d-befd-9879f5f65f50"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.400-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-517--4104909142373009110, commit timestamp: Timestamp(1574796752, 2528)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.210-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.381-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 38e2f515-cecc-438d-bfb2-c238ac75d9d2: test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 ( b28a4daa-804b-4466-9062-872a0aae5d42 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.401-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 with provided UUID: 1da4737b-0194-409d-befd-9879f5f65f50 and options: { uuid: UUID("1da4737b-0194-409d-befd-9879f5f65f50"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.210-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.396-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.402-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: ba401e11-8799-455d-8bfb-db8bae98f98d: test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 ( b28a4daa-804b-4466-9062-872a0aae5d42 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.210-0500 I STORAGE [conn110] Index build initialized: 28b03ecc-7e29-4e58-be29-0f2c52973aec: test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 (126f0461-584f-4161-b6e0-63f434414629 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.397-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 with provided UUID: 0eaf7944-47f7-48ed-abb0-2c0463c62c58 and options: { uuid: UUID("0eaf7944-47f7-48ed-abb0-2c0463c62c58"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.417-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.210-0500 I INDEX [conn110] Waiting for index build to complete: 28b03ecc-7e29-4e58-be29-0f2c52973aec
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.413-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.417-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796752, 2528) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796752, 2528), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 13802 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 106ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.211-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.434-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.418-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 with provided UUID: 0eaf7944-47f7-48ed-abb0-2c0463c62c58 and options: { uuid: UUID("0eaf7944-47f7-48ed-abb0-2c0463c62c58"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.213-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 88676b8f-340c-423e-939b-a93c9e05b702: test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce ( ff19dd80-07b1-47d7-abdc-de2e7cb53d66 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.434-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.431-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.214-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.434-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: bab6f0e6-4c95-44ba-88f5-c55f122fe01c: test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 (d77031cf-6538-4b23-b849-16c37c1a47c0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.451-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.214-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 with generated UUID: b28a4daa-804b-4466-9062-872a0aae5d42 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.434-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.451-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.225-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.434-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.451-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: e92faf8b-f2e0-4802-bb2a-963799290341: test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 (d77031cf-6538-4b23-b849-16c37c1a47c0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.241-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.435-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e (3602026a-acbd-43e9-9267-8b9a2fc24f0f) to test5_fsmdb0.agg_out and drop 126f0461-584f-4161-b6e0-63f434414629.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.451-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.241-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.436-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.452-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.241-0500 I STORAGE [conn114] Index build initialized: f496d87c-ca53-4f65-a0c5-ba9b92bff432: test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e (3602026a-acbd-43e9-9267-8b9a2fc24f0f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.436-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (126f0461-584f-4161-b6e0-63f434414629) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 3034), t: 1 } and commit timestamp Timestamp(1574796752, 3034)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.452-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e (3602026a-acbd-43e9-9267-8b9a2fc24f0f) to test5_fsmdb0.agg_out and drop 126f0461-584f-4161-b6e0-63f434414629.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.241-0500 I INDEX [conn114] Waiting for index build to complete: f496d87c-ca53-4f65-a0c5-ba9b92bff432
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.436-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (126f0461-584f-4161-b6e0-63f434414629).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.454-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.242-0500 I INDEX [conn46] Index build completed: 88676b8f-340c-423e-939b-a93c9e05b702
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.436-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 3602026a-acbd-43e9-9267-8b9a2fc24f0f from test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.455-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (126f0461-584f-4161-b6e0-63f434414629) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 3034), t: 1 } and commit timestamp Timestamp(1574796752, 3034)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.242-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796752, 569), signature: { hash: BinData(0, 56F3EFC472C7D97A3EEA7034FBB4EF88057B5AA8), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 19856 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 102ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.437-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (126f0461-584f-4161-b6e0-63f434414629)'. Ident: 'index-522--8000595249233899911', commit timestamp: 'Timestamp(1574796752, 3034)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.455-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (126f0461-584f-4161-b6e0-63f434414629).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.242-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 28b03ecc-7e29-4e58-be29-0f2c52973aec: test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 ( 126f0461-584f-4161-b6e0-63f434414629 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.437-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (126f0461-584f-4161-b6e0-63f434414629)'. Ident: 'index-531--8000595249233899911', commit timestamp: 'Timestamp(1574796752, 3034)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.455-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 3602026a-acbd-43e9-9267-8b9a2fc24f0f from test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.242-0500 I INDEX [conn110] Index build completed: 28b03ecc-7e29-4e58-be29-0f2c52973aec
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.437-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-521--8000595249233899911, commit timestamp: Timestamp(1574796752, 3034)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.455-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (126f0461-584f-4161-b6e0-63f434414629)'. Ident: 'index-522--4104909142373009110', commit timestamp: 'Timestamp(1574796752, 3034)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.250-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.438-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: bab6f0e6-4c95-44ba-88f5-c55f122fe01c: test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 ( d77031cf-6538-4b23-b849-16c37c1a47c0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.455-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (126f0461-584f-4161-b6e0-63f434414629)'. Ident: 'index-531--4104909142373009110', commit timestamp: 'Timestamp(1574796752, 3034)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.250-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.441-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c with provided UUID: 3324b894-56e0-49fd-89f5-45beb58177b3 and options: { uuid: UUID("3324b894-56e0-49fd-89f5-45beb58177b3"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.455-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-521--4104909142373009110, commit timestamp: Timestamp(1574796752, 3034)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.250-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (08807cf2-77b8-4659-bb56-46ea20402109) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 1520), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.455-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.456-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e92faf8b-f2e0-4802-bb2a-963799290341: test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 ( d77031cf-6538-4b23-b849-16c37c1a47c0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.250-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (08807cf2-77b8-4659-bb56-46ea20402109).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.459-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 (b28a4daa-804b-4466-9062-872a0aae5d42) to test5_fsmdb0.agg_out and drop 3602026a-acbd-43e9-9267-8b9a2fc24f0f.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.459-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c with provided UUID: 3324b894-56e0-49fd-89f5-45beb58177b3 and options: { uuid: UUID("3324b894-56e0-49fd-89f5-45beb58177b3"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.250-0500 I STORAGE [conn112] renameCollection: renaming collection d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc from test5_fsmdb0.tmp.agg_out.69db7e46-f661-4eca-8bce-2a728dfcde21 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.459-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (3602026a-acbd-43e9-9267-8b9a2fc24f0f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 3537), t: 1 } and commit timestamp Timestamp(1574796752, 3537)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.471-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.251-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (08807cf2-77b8-4659-bb56-46ea20402109)'. Ident: 'index-500-8224331490264904478', commit timestamp: 'Timestamp(1574796752, 1520)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.459-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (3602026a-acbd-43e9-9267-8b9a2fc24f0f).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.474-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 (b28a4daa-804b-4466-9062-872a0aae5d42) to test5_fsmdb0.agg_out and drop 3602026a-acbd-43e9-9267-8b9a2fc24f0f.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.251-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (08807cf2-77b8-4659-bb56-46ea20402109)'. Ident: 'index-503-8224331490264904478', commit timestamp: 'Timestamp(1574796752, 1520)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.459-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection b28a4daa-804b-4466-9062-872a0aae5d42 from test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.474-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (3602026a-acbd-43e9-9267-8b9a2fc24f0f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 3537), t: 1 } and commit timestamp Timestamp(1574796752, 3537)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.251-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-498-8224331490264904478, commit timestamp: Timestamp(1574796752, 1520)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.459-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3602026a-acbd-43e9-9267-8b9a2fc24f0f)'. Ident: 'index-526--8000595249233899911', commit timestamp: 'Timestamp(1574796752, 3537)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.474-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (3602026a-acbd-43e9-9267-8b9a2fc24f0f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.251-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.459-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3602026a-acbd-43e9-9267-8b9a2fc24f0f)'. Ident: 'index-535--8000595249233899911', commit timestamp: 'Timestamp(1574796752, 3537)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.474-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection b28a4daa-804b-4466-9062-872a0aae5d42 from test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.251-0500 I INDEX [conn108] Registering index build: 7a3b4ea0-bc98-43f0-8397-3615b0a6cb6e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:32.459-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-525--8000595249233899911, commit timestamp: Timestamp(1574796752, 3537)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.474-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3602026a-acbd-43e9-9267-8b9a2fc24f0f)'. Ident: 'index-526--4104909142373009110', commit timestamp: 'Timestamp(1574796752, 3537)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.251-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8914000619653013311, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 85277966932890036, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796749230), clusterTime: Timestamp(1574796749, 3408) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796749, 3536), signature: { hash: BinData(0, E52E6B1E4F4C50C54E92D0317F270BD159ED2D2D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 22018 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3019ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.474-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3602026a-acbd-43e9-9267-8b9a2fc24f0f)'. Ident: 'index-535--4104909142373009110', commit timestamp: 'Timestamp(1574796752, 3537)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.406-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e with provided UUID: 1f16e5d9-f3a7-4e90-820d-da5e3cdf8703 and options: { uuid: UUID("1f16e5d9-f3a7-4e90-820d-da5e3cdf8703"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.251-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:32.474-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-525--4104909142373009110, commit timestamp: Timestamp(1574796752, 3537)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.254-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 with generated UUID: d77031cf-6538-4b23-b849-16c37c1a47c0 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.261-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.279-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.279-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.279-0500 I STORAGE [conn108] Index build initialized: 7a3b4ea0-bc98-43f0-8397-3615b0a6cb6e: test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 (b28a4daa-804b-4466-9062-872a0aae5d42 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.279-0500 I INDEX [conn108] Waiting for index build to complete: 7a3b4ea0-bc98-43f0-8397-3615b0a6cb6e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.281-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: f496d87c-ca53-4f65-a0c5-ba9b92bff432: test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e ( 3602026a-acbd-43e9-9267-8b9a2fc24f0f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.281-0500 I INDEX [conn114] Index build completed: f496d87c-ca53-4f65-a0c5-ba9b92bff432
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.288-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.289-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.289-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 2525), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.289-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.289-0500 I STORAGE [conn110] renameCollection: renaming collection ff19dd80-07b1-47d7-abdc-de2e7cb53d66 from test5_fsmdb0.tmp.agg_out.b3b8a1bd-4630-431a-aeae-8b8fde5a77ce to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.289-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc)'. Ident: 'index-508-8224331490264904478', commit timestamp: 'Timestamp(1574796752, 2525)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.289-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d7ab4d33-46b1-4c53-a89c-7f71dbbf7cdc)'. Ident: 'index-511-8224331490264904478', commit timestamp: 'Timestamp(1574796752, 2525)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.289-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-505-8224331490264904478, commit timestamp: Timestamp(1574796752, 2525)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.289-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.289-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 50110919479547598, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5706724159619259357, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796749254), clusterTime: Timestamp(1574796749, 3847) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796749, 3848), signature: { hash: BinData(0, E52E6B1E4F4C50C54E92D0317F270BD159ED2D2D), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 181ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.290-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.292-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.292-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.292-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (ff19dd80-07b1-47d7-abdc-de2e7cb53d66) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 2528), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.292-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (ff19dd80-07b1-47d7-abdc-de2e7cb53d66).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.292-0500 I STORAGE [conn46] renameCollection: renaming collection 126f0461-584f-4161-b6e0-63f434414629 from test5_fsmdb0.tmp.agg_out.8cdb7206-a92a-4a3e-afa3-3dc349eea824 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.292-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ff19dd80-07b1-47d7-abdc-de2e7cb53d66)'. Ident: 'index-509-8224331490264904478', commit timestamp: 'Timestamp(1574796752, 2528)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.292-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ff19dd80-07b1-47d7-abdc-de2e7cb53d66)'. Ident: 'index-513-8224331490264904478', commit timestamp: 'Timestamp(1574796752, 2528)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.292-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-506-8224331490264904478, commit timestamp: Timestamp(1574796752, 2528)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.292-0500 I INDEX [conn112] Registering index build: bf666a2d-2578-4242-85c7-7dfcd8ddefda
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.292-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6517913584106705025, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3128626698042853327, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796752109), clusterTime: Timestamp(1574796749, 3848) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796752, 2), signature: { hash: BinData(0, 56F3EFC472C7D97A3EEA7034FBB4EF88057B5AA8), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 182ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.293-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 with generated UUID: 1da4737b-0194-409d-befd-9879f5f65f50 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.294-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 7a3b4ea0-bc98-43f0-8397-3615b0a6cb6e: test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 ( b28a4daa-804b-4466-9062-872a0aae5d42 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.296-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 with generated UUID: 0eaf7944-47f7-48ed-abb0-2c0463c62c58 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.325-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.325-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.325-0500 I STORAGE [conn112] Index build initialized: bf666a2d-2578-4242-85c7-7dfcd8ddefda: test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 (d77031cf-6538-4b23-b849-16c37c1a47c0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.325-0500 I INDEX [conn112] Waiting for index build to complete: bf666a2d-2578-4242-85c7-7dfcd8ddefda
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.325-0500 I INDEX [conn108] Index build completed: 7a3b4ea0-bc98-43f0-8397-3615b0a6cb6e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.325-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.333-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.339-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.339-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.341-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.342-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.342-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (126f0461-584f-4161-b6e0-63f434414629) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 3034), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.342-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (126f0461-584f-4161-b6e0-63f434414629).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.342-0500 I STORAGE [conn114] renameCollection: renaming collection 3602026a-acbd-43e9-9267-8b9a2fc24f0f from test5_fsmdb0.tmp.agg_out.cd25e5d5-5824-4227-950f-b92f0514e44e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.342-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (126f0461-584f-4161-b6e0-63f434414629)'. Ident: 'index-510-8224331490264904478', commit timestamp: 'Timestamp(1574796752, 3034)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.342-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (126f0461-584f-4161-b6e0-63f434414629)'. Ident: 'index-515-8224331490264904478', commit timestamp: 'Timestamp(1574796752, 3034)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.342-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-507-8224331490264904478, commit timestamp: Timestamp(1574796752, 3034)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.342-0500 I INDEX [conn110] Registering index build: e8206b63-9eea-4ec7-b98a-d369c9eace0b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.342-0500 I INDEX [conn46] Registering index build: 35bdb6c4-7349-459d-84db-ee81388c964c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.342-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 18450627289403297, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5421131300303003229, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796752180), clusterTime: Timestamp(1574796752, 1010) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796752, 1011), signature: { hash: BinData(0, 56F3EFC472C7D97A3EEA7034FBB4EF88057B5AA8), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 161ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.342-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: bf666a2d-2578-4242-85c7-7dfcd8ddefda: test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 ( d77031cf-6538-4b23-b849-16c37c1a47c0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.345-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c with generated UUID: 3324b894-56e0-49fd-89f5-45beb58177b3 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.361-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.361-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.361-0500 I STORAGE [conn110] Index build initialized: e8206b63-9eea-4ec7-b98a-d369c9eace0b: test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 (0eaf7944-47f7-48ed-abb0-2c0463c62c58 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.361-0500 I INDEX [conn110] Waiting for index build to complete: e8206b63-9eea-4ec7-b98a-d369c9eace0b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.361-0500 I INDEX [conn112] Index build completed: bf666a2d-2578-4242-85c7-7dfcd8ddefda
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.377-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.377-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.378-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (3602026a-acbd-43e9-9267-8b9a2fc24f0f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796752, 3537), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.378-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (3602026a-acbd-43e9-9267-8b9a2fc24f0f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.378-0500 I STORAGE [conn108] renameCollection: renaming collection b28a4daa-804b-4466-9062-872a0aae5d42 from test5_fsmdb0.tmp.agg_out.f83c04f4-c780-4f56-8ef2-ea85aef8f160 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.378-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3602026a-acbd-43e9-9267-8b9a2fc24f0f)'. Ident: 'index-518-8224331490264904478', commit timestamp: 'Timestamp(1574796752, 3537)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.378-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3602026a-acbd-43e9-9267-8b9a2fc24f0f)'. Ident: 'index-519-8224331490264904478', commit timestamp: 'Timestamp(1574796752, 3537)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.378-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-516-8224331490264904478, commit timestamp: Timestamp(1574796752, 3537)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.378-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.378-0500 I INDEX [conn114] Registering index build: 83635620-2930-406a-aa02-98dba5471d09
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.378-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6818992226009885682, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8629912323729768191, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796752213), clusterTime: Timestamp(1574796752, 1079) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796752, 1079), signature: { hash: BinData(0, 56F3EFC472C7D97A3EEA7034FBB4EF88057B5AA8), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 164ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.378-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.381-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e with generated UUID: 1f16e5d9-f3a7-4e90-820d-da5e3cdf8703 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.388-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.404-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.398-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.407-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: e8206b63-9eea-4ec7-b98a-d369c9eace0b: test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 ( 0eaf7944-47f7-48ed-abb0-2c0463c62c58 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:32.416-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.398-0500 I STORAGE [conn46] Index build initialized: 35bdb6c4-7349-459d-84db-ee81388c964c: test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 (1da4737b-0194-409d-befd-9879f5f65f50 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.398-0500 I INDEX [conn46] Waiting for index build to complete: 35bdb6c4-7349-459d-84db-ee81388c964c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.398-0500 I INDEX [conn110] Index build completed: e8206b63-9eea-4ec7-b98a-d369c9eace0b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.399-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796752, 3031), signature: { hash: BinData(0, 56F3EFC472C7D97A3EEA7034FBB4EF88057B5AA8), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 2674 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 1059ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.399-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e appName: "tid:2" command: create { create: "tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e", temp: true, validationLevel: "moderate", validationAction: "error", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796752, 3537), signature: { hash: BinData(0, 56F3EFC472C7D97A3EEA7034FBB4EF88057B5AA8), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 1017ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.399-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.399-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (b28a4daa-804b-4466-9062-872a0aae5d42) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 2), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.399-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (b28a4daa-804b-4466-9062-872a0aae5d42).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.399-0500 I STORAGE [conn112] renameCollection: renaming collection d77031cf-6538-4b23-b849-16c37c1a47c0 from test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.399-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (b28a4daa-804b-4466-9062-872a0aae5d42)'. Ident: 'index-522-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.399-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (b28a4daa-804b-4466-9062-872a0aae5d42)'. Ident: 'index-523-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.399-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-520-8224331490264904478, commit timestamp: Timestamp(1574796753, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.399-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.399-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 appName: "tid:4" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "moderate", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796752, 4040), signature: { hash: BinData(0, 56F3EFC472C7D97A3EEA7034FBB4EF88057B5AA8), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 993394 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 993ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.399-0500 I INDEX [conn110] Registering index build: 652e1929-297c-4d75-a4e5-b1b57e3a2acf
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.399-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796752, 2528), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796752, 2528), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796752, 2528). Collection minimum timestamp is Timestamp(1574796752, 3604)" errName:SnapshotUnavailable errCode:246 reslen:582 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 980424 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 980ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.399-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 971780379530229986, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7759287265973899446, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796752252), clusterTime: Timestamp(1574796752, 1520) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796752, 1520), signature: { hash: BinData(0, 56F3EFC472C7D97A3EEA7034FBB4EF88057B5AA8), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 1146ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.400-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.401-0500 I COMMAND [conn68] CMD: dropIndexes test5_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.406-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.412-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.412-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.412-0500 I STORAGE [conn114] Index build initialized: 83635620-2930-406a-aa02-98dba5471d09: test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c (3324b894-56e0-49fd-89f5-45beb58177b3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.412-0500 I INDEX [conn114] Waiting for index build to complete: 83635620-2930-406a-aa02-98dba5471d09
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.413-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.413-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 35bdb6c4-7349-459d-84db-ee81388c964c: test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 ( 1da4737b-0194-409d-befd-9879f5f65f50 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.414-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.415-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 with generated UUID: f5d8aeab-fb68-462a-8f0f-48ce1f8e479e and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.421-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.422-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e with provided UUID: 1f16e5d9-f3a7-4e90-820d-da5e3cdf8703 and options: { uuid: UUID("1f16e5d9-f3a7-4e90-820d-da5e3cdf8703"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.424-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.437-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.437-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.437-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: d77dbb5f-eede-42a6-9cc5-6a8d32c1cce6: test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 (0eaf7944-47f7-48ed-abb0-2c0463c62c58 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.437-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.437-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.438-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.438-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.438-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.438-0500 I STORAGE [conn110] Index build initialized: 652e1929-297c-4d75-a4e5-b1b57e3a2acf: test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.439-0500 I INDEX [conn110] Waiting for index build to complete: 652e1929-297c-4d75-a4e5-b1b57e3a2acf
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.439-0500 I INDEX [conn46] Index build completed: 35bdb6c4-7349-459d-84db-ee81388c964c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.439-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796752, 3031), signature: { hash: BinData(0, 56F3EFC472C7D97A3EEA7034FBB4EF88057B5AA8), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 24711 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 1104ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.439-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.441-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 83635620-2930-406a-aa02-98dba5471d09: test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c ( 3324b894-56e0-49fd-89f5-45beb58177b3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.441-0500 I INDEX [conn114] Index build completed: 83635620-2930-406a-aa02-98dba5471d09
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.441-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796752, 3537), signature: { hash: BinData(0, 56F3EFC472C7D97A3EEA7034FBB4EF88057B5AA8), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 639 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 1063ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.442-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 (d77031cf-6538-4b23-b849-16c37c1a47c0) to test5_fsmdb0.agg_out and drop b28a4daa-804b-4466-9062-872a0aae5d42.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.442-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (b28a4daa-804b-4466-9062-872a0aae5d42) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 2), t: 1 } and commit timestamp Timestamp(1574796753, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.442-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (b28a4daa-804b-4466-9062-872a0aae5d42).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.442-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection d77031cf-6538-4b23-b849-16c37c1a47c0 from test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.442-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (b28a4daa-804b-4466-9062-872a0aae5d42)'. Ident: 'index-530--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.442-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (b28a4daa-804b-4466-9062-872a0aae5d42)'. Ident: 'index-537--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.442-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-529--8000595249233899911, commit timestamp: Timestamp(1574796753, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.442-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d77dbb5f-eede-42a6-9cc5-6a8d32c1cce6: test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 ( 0eaf7944-47f7-48ed-abb0-2c0463c62c58 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.449-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.449-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.449-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (d77031cf-6538-4b23-b849-16c37c1a47c0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 510), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.449-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (d77031cf-6538-4b23-b849-16c37c1a47c0).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.449-0500 I STORAGE [conn108] renameCollection: renaming collection 0eaf7944-47f7-48ed-abb0-2c0463c62c58 from test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.449-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d77031cf-6538-4b23-b849-16c37c1a47c0)'. Ident: 'index-526-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 510)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.449-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d77031cf-6538-4b23-b849-16c37c1a47c0)'. Ident: 'index-527-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 510)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.449-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-524-8224331490264904478, commit timestamp: Timestamp(1574796753, 510)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.449-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.450-0500 I INDEX [conn112] Registering index build: ffed9154-85f2-4bd9-8660-6d2151ec7f36
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.450-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4890619580833364990, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 769107255468611398, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796752294), clusterTime: Timestamp(1574796752, 2528) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796752, 2529), signature: { hash: BinData(0, 56F3EFC472C7D97A3EEA7034FBB4EF88057B5AA8), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 1155ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:33.450-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796752, 2528), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 1156ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.452-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.486-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:33.523-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796752, 3098), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 1179ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.450-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:33.574-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796752, 3537), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 1193ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.452-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.486-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:33.523-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796752, 2525), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 1232ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.453-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e with generated UUID: 6f31ba43-d761-4e14-abdf-7c2faba353cf and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:33.605-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796753, 69), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 191ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.452-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 4e858a67-8252-47f7-80a0-ca6d904231fc: test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 (0eaf7944-47f7-48ed-abb0-2c0463c62c58 ): indexes: 1
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:33.704-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796753, 1518), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 180ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.486-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 2509ee2f-f66c-409a-b65d-99f8f3986519: test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 (1da4737b-0194-409d-befd-9879f5f65f50 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.491-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:33.657-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796753, 510), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 205ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.452-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:33.741-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796753, 1518), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 216ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.486-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.509-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:33.739-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796753, 2088), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 164ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.453-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.487-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.509-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:33.796-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796753, 2527), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 190ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.457-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 (d77031cf-6538-4b23-b849-16c37c1a47c0) to test5_fsmdb0.agg_out and drop b28a4daa-804b-4466-9062-872a0aae5d42.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.489-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.509-0500 I STORAGE [conn112] Index build initialized: ffed9154-85f2-4bd9-8660-6d2151ec7f36: test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:36.792-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796753, 3033), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3133ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.484-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.493-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 2509ee2f-f66c-409a-b65d-99f8f3986519: test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 ( 1da4737b-0194-409d-befd-9879f5f65f50 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.509-0500 I INDEX [conn112] Waiting for index build to complete: ffed9154-85f2-4bd9-8660-6d2151ec7f36
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.509-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.484-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (b28a4daa-804b-4466-9062-872a0aae5d42) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 2), t: 1 } and commit timestamp Timestamp(1574796753, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.495-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 with provided UUID: f5d8aeab-fb68-462a-8f0f-48ce1f8e479e and options: { uuid: UUID("f5d8aeab-fb68-462a-8f0f-48ce1f8e479e"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.511-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 652e1929-297c-4d75-a4e5-b1b57e3a2acf: test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e ( 1f16e5d9-f3a7-4e90-820d-da5e3cdf8703 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.484-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (b28a4daa-804b-4466-9062-872a0aae5d42).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.509-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.511-0500 I INDEX [conn110] Index build completed: 652e1929-297c-4d75-a4e5-b1b57e3a2acf
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.484-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection d77031cf-6538-4b23-b849-16c37c1a47c0 from test5_fsmdb0.tmp.agg_out.7ae00e82-5878-4411-9b8c-cbde8937dd93 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.525-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.511-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 2), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 111ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.484-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (b28a4daa-804b-4466-9062-872a0aae5d42)'. Ident: 'index-530--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.525-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.519-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.484-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (b28a4daa-804b-4466-9062-872a0aae5d42)'. Ident: 'index-537--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.525-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 174810ee-e5fe-4444-8057-e09d21d6ee9f: test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c (3324b894-56e0-49fd-89f5-45beb58177b3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.520-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.484-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-529--4104909142373009110, commit timestamp: Timestamp(1574796753, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.525-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.522-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.488-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4e858a67-8252-47f7-80a0-ca6d904231fc: test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 ( 0eaf7944-47f7-48ed-abb0-2c0463c62c58 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.526-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.522-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.505-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.528-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 (0eaf7944-47f7-48ed-abb0-2c0463c62c58) to test5_fsmdb0.agg_out and drop d77031cf-6538-4b23-b849-16c37c1a47c0.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.522-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (0eaf7944-47f7-48ed-abb0-2c0463c62c58) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 1517), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.505-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.530-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.522-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (0eaf7944-47f7-48ed-abb0-2c0463c62c58).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.505-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 40246e41-43e8-4b12-bf74-f704d88a76fd: test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 (1da4737b-0194-409d-befd-9879f5f65f50 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.530-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (d77031cf-6538-4b23-b849-16c37c1a47c0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 510), t: 1 } and commit timestamp Timestamp(1574796753, 510)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.522-0500 I STORAGE [conn46] renameCollection: renaming collection 3324b894-56e0-49fd-89f5-45beb58177b3 from test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.505-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.530-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (d77031cf-6538-4b23-b849-16c37c1a47c0).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.522-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0eaf7944-47f7-48ed-abb0-2c0463c62c58)'. Ident: 'index-532-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 1517)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.505-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.530-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 0eaf7944-47f7-48ed-abb0-2c0463c62c58 from test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.522-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0eaf7944-47f7-48ed-abb0-2c0463c62c58)'. Ident: 'index-533-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 1517)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.508-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.530-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d77031cf-6538-4b23-b849-16c37c1a47c0)'. Ident: 'index-534--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 510)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.522-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-529-8224331490264904478, commit timestamp: Timestamp(1574796753, 1517)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.512-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 40246e41-43e8-4b12-bf74-f704d88a76fd: test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 ( 1da4737b-0194-409d-befd-9879f5f65f50 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.530-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d77031cf-6538-4b23-b849-16c37c1a47c0)'. Ident: 'index-543--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 510)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.523-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.513-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 with provided UUID: f5d8aeab-fb68-462a-8f0f-48ce1f8e479e and options: { uuid: UUID("f5d8aeab-fb68-462a-8f0f-48ce1f8e479e"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.530-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-533--8000595249233899911, commit timestamp: Timestamp(1574796753, 510)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.523-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (3324b894-56e0-49fd-89f5-45beb58177b3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 1518), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.525-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.531-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e with provided UUID: 6f31ba43-d761-4e14-abdf-7c2faba353cf and options: { uuid: UUID("6f31ba43-d761-4e14-abdf-7c2faba353cf"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.523-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (3324b894-56e0-49fd-89f5-45beb58177b3).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.544-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.532-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 174810ee-e5fe-4444-8057-e09d21d6ee9f: test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c ( 3324b894-56e0-49fd-89f5-45beb58177b3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.523-0500 I STORAGE [conn114] renameCollection: renaming collection 1da4737b-0194-409d-befd-9879f5f65f50 from test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.544-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.548-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.523-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7331881591318732869, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5127049205563313387, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796752344), clusterTime: Timestamp(1574796752, 3098) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796752, 3162), signature: { hash: BinData(0, 56F3EFC472C7D97A3EEA7034FBB4EF88057B5AA8), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 1178ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.544-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 4e91f826-92d2-4f4b-8f10-423a67f63e48: test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c (3324b894-56e0-49fd-89f5-45beb58177b3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.569-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.523-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3324b894-56e0-49fd-89f5-45beb58177b3)'. Ident: 'index-536-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 1518)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.544-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.569-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.523-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3324b894-56e0-49fd-89f5-45beb58177b3)'. Ident: 'index-541-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 1518)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.545-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.569-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: ed7f46d2-825f-496f-9e60-c8ca3f0b9675: test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.523-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-534-8224331490264904478, commit timestamp: Timestamp(1574796753, 1518)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.547-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 (0eaf7944-47f7-48ed-abb0-2c0463c62c58) to test5_fsmdb0.agg_out and drop d77031cf-6538-4b23-b849-16c37c1a47c0.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.569-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.523-0500 I INDEX [conn108] Registering index build: a0b3b660-3310-418e-b628-0b3f386dde0e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.547-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.569-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.523-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 388108667730777378, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 694622782151818592, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796752291), clusterTime: Timestamp(1574796752, 2525) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796752, 2528), signature: { hash: BinData(0, 56F3EFC472C7D97A3EEA7034FBB4EF88057B5AA8), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 1230ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.547-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (d77031cf-6538-4b23-b849-16c37c1a47c0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 510), t: 1 } and commit timestamp Timestamp(1574796753, 510)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.571-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.523-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ffed9154-85f2-4bd9-8660-6d2151ec7f36: test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 ( f5d8aeab-fb68-462a-8f0f-48ce1f8e479e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.547-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (d77031cf-6538-4b23-b849-16c37c1a47c0).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.574-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: ed7f46d2-825f-496f-9e60-c8ca3f0b9675: test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e ( 1f16e5d9-f3a7-4e90-820d-da5e3cdf8703 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.526-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 with generated UUID: 08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.547-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 0eaf7944-47f7-48ed-abb0-2c0463c62c58 from test5_fsmdb0.tmp.agg_out.6cef7f54-86f1-47ef-9ba2-39d0aae3d481 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.592-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.526-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 with generated UUID: 9f6f085d-5129-4f40-aa08-28bd3c65b525 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.547-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d77031cf-6538-4b23-b849-16c37c1a47c0)'. Ident: 'index-534--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 510)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.592-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.553-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.547-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d77031cf-6538-4b23-b849-16c37c1a47c0)'. Ident: 'index-543--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 510)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.592-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: ac7e4b2b-538e-4668-8c82-dc5fbaf8fcb1: test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.592-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.547-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-533--4104909142373009110, commit timestamp: Timestamp(1574796753, 510)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.593-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.553-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.549-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 4e91f826-92d2-4f4b-8f10-423a67f63e48: test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c ( 3324b894-56e0-49fd-89f5-45beb58177b3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.594-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c (3324b894-56e0-49fd-89f5-45beb58177b3) to test5_fsmdb0.agg_out and drop 0eaf7944-47f7-48ed-abb0-2c0463c62c58.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.553-0500 I STORAGE [conn108] Index build initialized: a0b3b660-3310-418e-b628-0b3f386dde0e: test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e (6f31ba43-d761-4e14-abdf-7c2faba353cf ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.549-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e with provided UUID: 6f31ba43-d761-4e14-abdf-7c2faba353cf and options: { uuid: UUID("6f31ba43-d761-4e14-abdf-7c2faba353cf"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.595-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.553-0500 I INDEX [conn108] Waiting for index build to complete: a0b3b660-3310-418e-b628-0b3f386dde0e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.562-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.596-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (0eaf7944-47f7-48ed-abb0-2c0463c62c58) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 1517), t: 1 } and commit timestamp Timestamp(1574796753, 1517)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.553-0500 I INDEX [conn112] Index build completed: ffed9154-85f2-4bd9-8660-6d2151ec7f36
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.585-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.596-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (0eaf7944-47f7-48ed-abb0-2c0463c62c58).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.553-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.585-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.596-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 3324b894-56e0-49fd-89f5-45beb58177b3 from test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.553-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 510), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 52 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 103ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.585-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 979edb47-ab00-4324-92a7-e63a6a8536cf: test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.596-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0eaf7944-47f7-48ed-abb0-2c0463c62c58)'. Ident: 'index-542--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 1517)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.562-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.585-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.596-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0eaf7944-47f7-48ed-abb0-2c0463c62c58)'. Ident: 'index-549--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 1517)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.570-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.585-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.596-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-541--8000595249233899911, commit timestamp: Timestamp(1574796753, 1517)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.570-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.587-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.596-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 (1da4737b-0194-409d-befd-9879f5f65f50) to test5_fsmdb0.agg_out and drop 3324b894-56e0-49fd-89f5-45beb58177b3.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.573-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.591-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 979edb47-ab00-4324-92a7-e63a6a8536cf: test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e ( 1f16e5d9-f3a7-4e90-820d-da5e3cdf8703 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.596-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (3324b894-56e0-49fd-89f5-45beb58177b3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 1518), t: 1 } and commit timestamp Timestamp(1574796753, 1518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.573-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.608-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.596-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (3324b894-56e0-49fd-89f5-45beb58177b3).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.573-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (1da4737b-0194-409d-befd-9879f5f65f50) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 2024), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.608-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.597-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 1da4737b-0194-409d-befd-9879f5f65f50 from test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.573-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (1da4737b-0194-409d-befd-9879f5f65f50).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.608-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: ffa2d799-daff-42ac-a4e2-5010cc04ac0d: test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.597-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3324b894-56e0-49fd-89f5-45beb58177b3)'. Ident: 'index-546--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 1518)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.573-0500 I STORAGE [conn112] renameCollection: renaming collection 1f16e5d9-f3a7-4e90-820d-da5e3cdf8703 from test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.608-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.597-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3324b894-56e0-49fd-89f5-45beb58177b3)'. Ident: 'index-555--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 1518)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.573-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1da4737b-0194-409d-befd-9879f5f65f50)'. Ident: 'index-531-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 2024)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.608-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.597-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-545--8000595249233899911, commit timestamp: Timestamp(1574796753, 1518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.573-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1da4737b-0194-409d-befd-9879f5f65f50)'. Ident: 'index-537-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 2024)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.609-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c (3324b894-56e0-49fd-89f5-45beb58177b3) to test5_fsmdb0.agg_out and drop 0eaf7944-47f7-48ed-abb0-2c0463c62c58.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.597-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 with provided UUID: 08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9 and options: { uuid: UUID("08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.573-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-528-8224331490264904478, commit timestamp: Timestamp(1574796753, 2024)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.611-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.598-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: ac7e4b2b-538e-4668-8c82-dc5fbaf8fcb1: test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 ( f5d8aeab-fb68-462a-8f0f-48ce1f8e479e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.573-0500 I INDEX [conn110] Registering index build: 8ec0d70b-5e40-474b-b243-4d6654740d67
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.611-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (0eaf7944-47f7-48ed-abb0-2c0463c62c58) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 1517), t: 1 } and commit timestamp Timestamp(1574796753, 1517)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.612-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.573-0500 I INDEX [conn46] Registering index build: 257ec74b-1c98-4711-be25-61bedf99c94e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.611-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (0eaf7944-47f7-48ed-abb0-2c0463c62c58).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.613-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 with provided UUID: 9f6f085d-5129-4f40-aa08-28bd3c65b525 and options: { uuid: UUID("9f6f085d-5129-4f40-aa08-28bd3c65b525"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.573-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6745293425949245934, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 387696840321029078, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796752380), clusterTime: Timestamp(1574796752, 3537) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796752, 3537), signature: { hash: BinData(0, 56F3EFC472C7D97A3EEA7034FBB4EF88057B5AA8), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 1192ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.611-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 3324b894-56e0-49fd-89f5-45beb58177b3 from test5_fsmdb0.tmp.agg_out.f7906564-8da2-4c48-9d94-25dbc228990c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.629-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.574-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: a0b3b660-3310-418e-b628-0b3f386dde0e: test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e ( 6f31ba43-d761-4e14-abdf-7c2faba353cf ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.611-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0eaf7944-47f7-48ed-abb0-2c0463c62c58)'. Ident: 'index-542--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 1517)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.650-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.576-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 with generated UUID: 50095387-c870-4f14-b3db-f1a1bb3cb39b and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.611-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0eaf7944-47f7-48ed-abb0-2c0463c62c58)'. Ident: 'index-549--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 1517)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.650-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.598-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.611-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-541--4104909142373009110, commit timestamp: Timestamp(1574796753, 1517)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.650-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 863c85d5-c49b-45c4-ae0a-27fb50570ff2: test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e (6f31ba43-d761-4e14-abdf-7c2faba353cf ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.598-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.612-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 (1da4737b-0194-409d-befd-9879f5f65f50) to test5_fsmdb0.agg_out and drop 3324b894-56e0-49fd-89f5-45beb58177b3.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.650-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.598-0500 I STORAGE [conn110] Index build initialized: 8ec0d70b-5e40-474b-b243-4d6654740d67: test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.612-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (3324b894-56e0-49fd-89f5-45beb58177b3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 1518), t: 1 } and commit timestamp Timestamp(1574796753, 1518)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.650-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.598-0500 I INDEX [conn110] Waiting for index build to complete: 8ec0d70b-5e40-474b-b243-4d6654740d67
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.612-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (3324b894-56e0-49fd-89f5-45beb58177b3).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.651-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703) to test5_fsmdb0.agg_out and drop 1da4737b-0194-409d-befd-9879f5f65f50.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.598-0500 I INDEX [conn108] Index build completed: a0b3b660-3310-418e-b628-0b3f386dde0e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.612-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 1da4737b-0194-409d-befd-9879f5f65f50 from test5_fsmdb0.tmp.agg_out.3e6381ee-daeb-4845-80c5-f7fba86fd771 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.652-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.604-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.612-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3324b894-56e0-49fd-89f5-45beb58177b3)'. Ident: 'index-546--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 1518)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.652-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (1da4737b-0194-409d-befd-9879f5f65f50) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 2024), t: 1 } and commit timestamp Timestamp(1574796753, 2024)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.604-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.612-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3324b894-56e0-49fd-89f5-45beb58177b3)'. Ident: 'index-555--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 1518)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.652-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (1da4737b-0194-409d-befd-9879f5f65f50).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.604-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 2527), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.612-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-545--4104909142373009110, commit timestamp: Timestamp(1574796753, 1518)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.652-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 1f16e5d9-f3a7-4e90-820d-da5e3cdf8703 from test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.605-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.613-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ffa2d799-daff-42ac-a4e2-5010cc04ac0d: test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 ( f5d8aeab-fb68-462a-8f0f-48ce1f8e479e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.653-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1da4737b-0194-409d-befd-9879f5f65f50)'. Ident: 'index-540--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 2024)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.605-0500 I STORAGE [conn114] renameCollection: renaming collection f5d8aeab-fb68-462a-8f0f-48ce1f8e479e from test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.613-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 with provided UUID: 08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9 and options: { uuid: UUID("08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.653-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1da4737b-0194-409d-befd-9879f5f65f50)'. Ident: 'index-551--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 2024)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.605-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703)'. Ident: 'index-540-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 2527)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.630-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.653-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-539--8000595249233899911, commit timestamp: Timestamp(1574796753, 2024)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.605-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703)'. Ident: 'index-543-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 2527)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.631-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 with provided UUID: 9f6f085d-5129-4f40-aa08-28bd3c65b525 and options: { uuid: UUID("9f6f085d-5129-4f40-aa08-28bd3c65b525"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.655-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 863c85d5-c49b-45c4-ae0a-27fb50570ff2: test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e ( 6f31ba43-d761-4e14-abdf-7c2faba353cf ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.605-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-538-8224331490264904478, commit timestamp: Timestamp(1574796753, 2527)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.646-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.657-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 with provided UUID: 50095387-c870-4f14-b3db-f1a1bb3cb39b and options: { uuid: UUID("50095387-c870-4f14-b3db-f1a1bb3cb39b"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.605-0500 I INDEX [conn112] Registering index build: 8fcc0e94-b59e-455a-807a-19c1a406adf3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.667-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.673-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.605-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.667-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.677-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e) to test5_fsmdb0.agg_out and drop 1f16e5d9-f3a7-4e90-820d-da5e3cdf8703.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.605-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7758424567474614685, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5785250283241546500, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796753414), clusterTime: Timestamp(1574796753, 69) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 133), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 190ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.667-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 9c43fd41-0dcc-475d-b715-1156ba8ed0e9: test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e (6f31ba43-d761-4e14-abdf-7c2faba353cf ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.677-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 2527), t: 1 } and commit timestamp Timestamp(1574796753, 2527)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.605-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.667-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.677-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.608-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 with generated UUID: 94d70fbc-859b-4ca2-8d81-ed31691453c3 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.668-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.677-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection f5d8aeab-fb68-462a-8f0f-48ce1f8e479e from test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.615-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.669-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703) to test5_fsmdb0.agg_out and drop 1da4737b-0194-409d-befd-9879f5f65f50.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.677-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703)'. Ident: 'index-548--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 2527)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.632-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:36.807-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796753, 3542), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3101ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.670-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.677-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703)'. Ident: 'index-559--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 2527)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.632-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.670-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (1da4737b-0194-409d-befd-9879f5f65f50) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 2024), t: 1 } and commit timestamp Timestamp(1574796753, 2024)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.677-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-547--8000595249233899911, commit timestamp: Timestamp(1574796753, 2527)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.632-0500 I STORAGE [conn46] Index build initialized: 257ec74b-1c98-4711-be25-61bedf99c94e: test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 (9f6f085d-5129-4f40-aa08-28bd3c65b525 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.670-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (1da4737b-0194-409d-befd-9879f5f65f50).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.678-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 with provided UUID: 94d70fbc-859b-4ca2-8d81-ed31691453c3 and options: { uuid: UUID("94d70fbc-859b-4ca2-8d81-ed31691453c3"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.632-0500 I INDEX [conn46] Waiting for index build to complete: 257ec74b-1c98-4711-be25-61bedf99c94e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.670-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 1f16e5d9-f3a7-4e90-820d-da5e3cdf8703 from test5_fsmdb0.tmp.agg_out.59933d42-b3ad-45e5-b444-d7f8ef16ef3e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.692-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.634-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 8ec0d70b-5e40-474b-b243-4d6654740d67: test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 ( 08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.670-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1da4737b-0194-409d-befd-9879f5f65f50)'. Ident: 'index-540--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 2024)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.710-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.643-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.670-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1da4737b-0194-409d-befd-9879f5f65f50)'. Ident: 'index-551--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 2024)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.710-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.656-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.670-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-539--4104909142373009110, commit timestamp: Timestamp(1574796753, 2024)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.710-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 728cdc7e-14fa-4417-81f0-37709d433e56: test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.656-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.672-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 9c43fd41-0dcc-475d-b715-1156ba8ed0e9: test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e ( 6f31ba43-d761-4e14-abdf-7c2faba353cf ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.710-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.656-0500 I STORAGE [conn112] Index build initialized: 8fcc0e94-b59e-455a-807a-19c1a406adf3: test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 (50095387-c870-4f14-b3db-f1a1bb3cb39b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.674-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 with provided UUID: 50095387-c870-4f14-b3db-f1a1bb3cb39b and options: { uuid: UUID("50095387-c870-4f14-b3db-f1a1bb3cb39b"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.711-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.656-0500 I INDEX [conn112] Waiting for index build to complete: 8fcc0e94-b59e-455a-807a-19c1a406adf3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.689-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.714-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.656-0500 I INDEX [conn110] Index build completed: 8ec0d70b-5e40-474b-b243-4d6654740d67
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.693-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e) to test5_fsmdb0.agg_out and drop 1f16e5d9-f3a7-4e90-820d-da5e3cdf8703.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.716-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 728cdc7e-14fa-4417-81f0-37709d433e56: test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 ( 08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.656-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.693-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 2527), t: 1 } and commit timestamp Timestamp(1574796753, 2527)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.720-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e (6f31ba43-d761-4e14-abdf-7c2faba353cf) to test5_fsmdb0.agg_out and drop f5d8aeab-fb68-462a-8f0f-48ce1f8e479e.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.656-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 3033), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.693-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.720-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 3033), t: 1 } and commit timestamp Timestamp(1574796753, 3033)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.656-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.693-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection f5d8aeab-fb68-462a-8f0f-48ce1f8e479e from test5_fsmdb0.tmp.agg_out.d64f93e4-4a37-433e-991e-a8ca233577b6 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.720-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.656-0500 I STORAGE [conn108] renameCollection: renaming collection 6f31ba43-d761-4e14-abdf-7c2faba353cf from test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.693-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703)'. Ident: 'index-548--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 2527)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.720-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 6f31ba43-d761-4e14-abdf-7c2faba353cf from test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.656-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e)'. Ident: 'index-546-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 3033)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.693-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1f16e5d9-f3a7-4e90-820d-da5e3cdf8703)'. Ident: 'index-559--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 2527)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.720-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e)'. Ident: 'index-554--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 3033)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.656-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e)'. Ident: 'index-547-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 3033)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.693-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-547--4104909142373009110, commit timestamp: Timestamp(1574796753, 2527)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.720-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e)'. Ident: 'index-561--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 3033)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.656-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-544-8224331490264904478, commit timestamp: Timestamp(1574796753, 3033)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.694-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 with provided UUID: 94d70fbc-859b-4ca2-8d81-ed31691453c3 and options: { uuid: UUID("94d70fbc-859b-4ca2-8d81-ed31691453c3"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.720-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-553--8000595249233899911, commit timestamp: Timestamp(1574796753, 3033)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.656-0500 I INDEX [conn114] Registering index build: 7cbbeae2-5344-4671-aaca-79f34c192f57
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.708-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.721-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d with provided UUID: 077c5d1a-0471-4e96-b9bd-2a04de9b63c6 and options: { uuid: UUID("077c5d1a-0471-4e96-b9bd-2a04de9b63c6"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.656-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.730-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.737-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.656-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.730-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.755-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.657-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7275001886408344896, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3150822979950767988, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796753451), clusterTime: Timestamp(1574796753, 510) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 510), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 204ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.730-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: db98d78e-51f9-4266-a315-4f477e7f9218: test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.755-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.657-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.730-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.755-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 561d42f9-ef86-4908-bcdb-a3ab4e724c5f: test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 (9f6f085d-5129-4f40-aa08-28bd3c65b525 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.658-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.730-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.755-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.660-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d with generated UUID: 077c5d1a-0471-4e96-b9bd-2a04de9b63c6 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.733-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.756-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.667-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.735-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: db98d78e-51f9-4266-a315-4f477e7f9218: test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 ( 08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.758-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.670-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.736-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e (6f31ba43-d761-4e14-abdf-7c2faba353cf) to test5_fsmdb0.agg_out and drop f5d8aeab-fb68-462a-8f0f-48ce1f8e479e.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.768-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 561d42f9-ef86-4908-bcdb-a3ab4e724c5f: test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 ( 9f6f085d-5129-4f40-aa08-28bd3c65b525 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.686-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.736-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 3033), t: 1 } and commit timestamp Timestamp(1574796753, 3033)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.776-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.686-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.736-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.776-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.686-0500 I STORAGE [conn114] Index build initialized: 7cbbeae2-5344-4671-aaca-79f34c192f57: test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 (94d70fbc-859b-4ca2-8d81-ed31691453c3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.736-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 6f31ba43-d761-4e14-abdf-7c2faba353cf from test5_fsmdb0.tmp.agg_out.49ca5ab9-d52b-49f2-b109-cc5ab249071e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.776-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: a0c75384-5613-41ee-8842-d356e2a70cd4: test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 (50095387-c870-4f14-b3db-f1a1bb3cb39b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.686-0500 I INDEX [conn114] Waiting for index build to complete: 7cbbeae2-5344-4671-aaca-79f34c192f57
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.736-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e)'. Ident: 'index-554--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 3033)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.776-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.687-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.736-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f5d8aeab-fb68-462a-8f0f-48ce1f8e479e)'. Ident: 'index-561--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 3033)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.777-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.689-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 257ec74b-1c98-4711-be25-61bedf99c94e: test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 ( 9f6f085d-5129-4f40-aa08-28bd3c65b525 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.736-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-553--4104909142373009110, commit timestamp: Timestamp(1574796753, 3033)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.780-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.689-0500 I INDEX [conn46] Index build completed: 257ec74b-1c98-4711-be25-61bedf99c94e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.738-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d with provided UUID: 077c5d1a-0471-4e96-b9bd-2a04de9b63c6 and options: { uuid: UUID("077c5d1a-0471-4e96-b9bd-2a04de9b63c6"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.783-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: a0c75384-5613-41ee-8842-d356e2a70cd4: test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 ( 50095387-c870-4f14-b3db-f1a1bb3cb39b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.689-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 2021), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 8919 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 118ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.754-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.797-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.692-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 8fcc0e94-b59e-455a-807a-19c1a406adf3: test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 ( 50095387-c870-4f14-b3db-f1a1bb3cb39b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.771-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.797-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.692-0500 I INDEX [conn112] Index build completed: 8fcc0e94-b59e-455a-807a-19c1a406adf3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.771-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.797-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 099a2a7a-3950-4290-8039-bec5d9b15880: test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 (94d70fbc-859b-4ca2-8d81-ed31691453c3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.701-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.771-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 04552cdc-92e0-4248-a5ab-9c246705f56d: test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 (9f6f085d-5129-4f40-aa08-28bd3c65b525 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.797-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.701-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.771-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.798-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.703-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.772-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.799-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9) to test5_fsmdb0.agg_out and drop 6f31ba43-d761-4e14-abdf-7c2faba353cf.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.704-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.775-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.800-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.704-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (6f31ba43-d761-4e14-abdf-7c2faba353cf) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 3542), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.777-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 04552cdc-92e0-4248-a5ab-9c246705f56d: test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 ( 9f6f085d-5129-4f40-aa08-28bd3c65b525 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.800-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (6f31ba43-d761-4e14-abdf-7c2faba353cf) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 3542), t: 1 } and commit timestamp Timestamp(1574796753, 3542)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.704-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (6f31ba43-d761-4e14-abdf-7c2faba353cf).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.794-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.800-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (6f31ba43-d761-4e14-abdf-7c2faba353cf).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.704-0500 I STORAGE [conn110] renameCollection: renaming collection 08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9 from test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.794-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.800-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9 from test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.704-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6f31ba43-d761-4e14-abdf-7c2faba353cf)'. Ident: 'index-550-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 3542)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.794-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: e7a570be-3608-4301-975d-a9ba1df6f554: test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 (50095387-c870-4f14-b3db-f1a1bb3cb39b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.801-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6f31ba43-d761-4e14-abdf-7c2faba353cf)'. Ident: 'index-558--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 3542)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.704-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6f31ba43-d761-4e14-abdf-7c2faba353cf)'. Ident: 'index-551-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 3542)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.794-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.801-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6f31ba43-d761-4e14-abdf-7c2faba353cf)'. Ident: 'index-567--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 3542)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.704-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-548-8224331490264904478, commit timestamp: Timestamp(1574796753, 3542)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.795-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.801-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-557--8000595249233899911, commit timestamp: Timestamp(1574796753, 3542)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.704-0500 I INDEX [conn108] Registering index build: 99e9e38f-c5e2-4e07-b434-c6a67204e8b8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.796-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.801-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 with provided UUID: 0b7fb243-90ba-407b-98d2-da072fdf9da4 and options: { uuid: UUID("0b7fb243-90ba-407b-98d2-da072fdf9da4"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.704-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 656009186286864391, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7816574682264075507, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796753524), clusterTime: Timestamp(1574796753, 1518) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 1518), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 178ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.800-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e7a570be-3608-4301-975d-a9ba1df6f554: test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 ( 50095387-c870-4f14-b3db-f1a1bb3cb39b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.803-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 099a2a7a-3950-4290-8039-bec5d9b15880: test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 ( 94d70fbc-859b-4ca2-8d81-ed31691453c3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.705-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 7cbbeae2-5344-4671-aaca-79f34c192f57: test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 ( 94d70fbc-859b-4ca2-8d81-ed31691453c3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.813-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.817-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.707-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 with generated UUID: 0b7fb243-90ba-407b-98d2-da072fdf9da4 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.813-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.840-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.729-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.813-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 78c4508b-7141-45a0-a764-bd34e8d79b8c: test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 (94d70fbc-859b-4ca2-8d81-ed31691453c3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.840-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.729-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.813-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.840-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: dd1a9b06-2548-4ec1-9a2b-3dfbb26eb310: test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d (077c5d1a-0471-4e96-b9bd-2a04de9b63c6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.729-0500 I STORAGE [conn108] Index build initialized: 99e9e38f-c5e2-4e07-b434-c6a67204e8b8: test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d (077c5d1a-0471-4e96-b9bd-2a04de9b63c6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.814-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.840-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.729-0500 I INDEX [conn108] Waiting for index build to complete: 99e9e38f-c5e2-4e07-b434-c6a67204e8b8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.814-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9) to test5_fsmdb0.agg_out and drop 6f31ba43-d761-4e14-abdf-7c2faba353cf.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.840-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.729-0500 I INDEX [conn114] Index build completed: 7cbbeae2-5344-4671-aaca-79f34c192f57
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.815-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.841-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 (50095387-c870-4f14-b3db-f1a1bb3cb39b) to test5_fsmdb0.agg_out and drop 08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.729-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.815-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (6f31ba43-d761-4e14-abdf-7c2faba353cf) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 3542), t: 1 } and commit timestamp Timestamp(1574796753, 3542)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.843-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.736-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.815-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (6f31ba43-d761-4e14-abdf-7c2faba353cf).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.843-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 4547), t: 1 } and commit timestamp Timestamp(1574796753, 4547)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.736-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.816-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9 from test5_fsmdb0.tmp.agg_out.a6640fc2-679a-4536-8c0d-b75bf6c0b271 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.843-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.738-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.816-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6f31ba43-d761-4e14-abdf-7c2faba353cf)'. Ident: 'index-558--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 3542)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.843-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection 50095387-c870-4f14-b3db-f1a1bb3cb39b from test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.739-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.816-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6f31ba43-d761-4e14-abdf-7c2faba353cf)'. Ident: 'index-567--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 3542)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.843-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9)'. Ident: 'index-564--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 4547)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.739-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 4547), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.816-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-557--4104909142373009110, commit timestamp: Timestamp(1574796753, 3542)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.843-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9)'. Ident: 'index-573--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 4547)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.739-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.818-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 78c4508b-7141-45a0-a764-bd34e8d79b8c: test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 ( 94d70fbc-859b-4ca2-8d81-ed31691453c3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.843-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-563--8000595249233899911, commit timestamp: Timestamp(1574796753, 4547)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.739-0500 I STORAGE [conn112] renameCollection: renaming collection 50095387-c870-4f14-b3db-f1a1bb3cb39b from test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.818-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 with provided UUID: 0b7fb243-90ba-407b-98d2-da072fdf9da4 and options: { uuid: UUID("0b7fb243-90ba-407b-98d2-da072fdf9da4"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.844-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 (9f6f085d-5129-4f40-aa08-28bd3c65b525) to test5_fsmdb0.agg_out and drop 50095387-c870-4f14-b3db-f1a1bb3cb39b.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.739-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9)'. Ident: 'index-555-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 4547)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.832-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.844-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (50095387-c870-4f14-b3db-f1a1bb3cb39b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 4548), t: 1 } and commit timestamp Timestamp(1574796753, 4548)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.739-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9)'. Ident: 'index-557-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 4547)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.853-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.844-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (50095387-c870-4f14-b3db-f1a1bb3cb39b).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.739-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-552-8224331490264904478, commit timestamp: Timestamp(1574796753, 4547)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.853-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.844-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 9f6f085d-5129-4f40-aa08-28bd3c65b525 from test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.739-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5872151775402563212, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 962102110771460472, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796753575), clusterTime: Timestamp(1574796753, 2088) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 2088), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 163ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.853-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: eefb4375-5699-4e6a-aebe-ea233ded20bc: test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d (077c5d1a-0471-4e96-b9bd-2a04de9b63c6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.844-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (50095387-c870-4f14-b3db-f1a1bb3cb39b)'. Ident: 'index-570--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 4548)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.740-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.853-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.844-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (50095387-c870-4f14-b3db-f1a1bb3cb39b)'. Ident: 'index-579--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 4548)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.740-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (50095387-c870-4f14-b3db-f1a1bb3cb39b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 4548), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.854-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.844-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-569--8000595249233899911, commit timestamp: Timestamp(1574796753, 4548)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.740-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (50095387-c870-4f14-b3db-f1a1bb3cb39b).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.855-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 (50095387-c870-4f14-b3db-f1a1bb3cb39b) to test5_fsmdb0.agg_out and drop 08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.845-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: dd1a9b06-2548-4ec1-9a2b-3dfbb26eb310: test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d ( 077c5d1a-0471-4e96-b9bd-2a04de9b63c6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.740-0500 I STORAGE [conn46] renameCollection: renaming collection 9f6f085d-5129-4f40-aa08-28bd3c65b525 from test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.856-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.850-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f with provided UUID: 01e5d84d-3870-4b0e-bb76-a1b64331197f and options: { uuid: UUID("01e5d84d-3870-4b0e-bb76-a1b64331197f"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.740-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (50095387-c870-4f14-b3db-f1a1bb3cb39b)'. Ident: 'index-560-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 4548)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.856-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 4547), t: 1 } and commit timestamp Timestamp(1574796753, 4547)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.864-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.740-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (50095387-c870-4f14-b3db-f1a1bb3cb39b)'. Ident: 'index-565-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 4548)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.856-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.865-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 with provided UUID: cd42a7f3-2ac8-4d86-910c-de28468e0433 and options: { uuid: UUID("cd42a7f3-2ac8-4d86-910c-de28468e0433"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.740-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-558-8224331490264904478, commit timestamp: Timestamp(1574796753, 4548)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.857-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 50095387-c870-4f14-b3db-f1a1bb3cb39b from test5_fsmdb0.tmp.agg_out.dfcd614e-ae6c-47c7-b701-552df74e87f8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.880-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.740-0500 I INDEX [conn110] Registering index build: 6a153653-edda-407b-9128-a22eebbbd6d1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.857-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9)'. Ident: 'index-564--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 4547)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.857-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (08fe7e6a-cbc1-4ca2-87a4-1496dab5d2e9)'. Ident: 'index-573--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 4547)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.857-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-563--4104909142373009110, commit timestamp: Timestamp(1574796753, 4547)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.857-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 (9f6f085d-5129-4f40-aa08-28bd3c65b525) to test5_fsmdb0.agg_out and drop 50095387-c870-4f14-b3db-f1a1bb3cb39b.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.857-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (50095387-c870-4f14-b3db-f1a1bb3cb39b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 4548), t: 1 } and commit timestamp Timestamp(1574796753, 4548)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.857-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (50095387-c870-4f14-b3db-f1a1bb3cb39b).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.857-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 9f6f085d-5129-4f40-aa08-28bd3c65b525 from test5_fsmdb0.tmp.agg_out.473c120e-3e6b-4848-ae49-51b75df7d6e5 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.857-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (50095387-c870-4f14-b3db-f1a1bb3cb39b)'. Ident: 'index-570--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 4548)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.858-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (50095387-c870-4f14-b3db-f1a1bb3cb39b)'. Ident: 'index-579--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 4548)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.858-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-569--4104909142373009110, commit timestamp: Timestamp(1574796753, 4548)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.859-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: eefb4375-5699-4e6a-aebe-ea233ded20bc: test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d ( 077c5d1a-0471-4e96-b9bd-2a04de9b63c6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.861-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796753, 4548) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796753, 4548), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 12761 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 116ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.866-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f with provided UUID: 01e5d84d-3870-4b0e-bb76-a1b64331197f and options: { uuid: UUID("01e5d84d-3870-4b0e-bb76-a1b64331197f"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.880-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.881-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 with provided UUID: cd42a7f3-2ac8-4d86-910c-de28468e0433 and options: { uuid: UUID("cd42a7f3-2ac8-4d86-910c-de28468e0433"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.896-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.908-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.908-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.908-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: fc65838c-12bd-4d4f-a9dd-e61882a8db9e: test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 (0b7fb243-90ba-407b-98d2-da072fdf9da4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.908-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.909-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.910-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 (94d70fbc-859b-4ca2-8d81-ed31691453c3) to test5_fsmdb0.agg_out and drop 9f6f085d-5129-4f40-aa08-28bd3c65b525.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.910-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.911-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (9f6f085d-5129-4f40-aa08-28bd3c65b525) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 5054), t: 1 } and commit timestamp Timestamp(1574796753, 5054)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.911-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (9f6f085d-5129-4f40-aa08-28bd3c65b525).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.911-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 94d70fbc-859b-4ca2-8d81-ed31691453c3 from test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.911-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9f6f085d-5129-4f40-aa08-28bd3c65b525)'. Ident: 'index-566--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 5054)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.911-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9f6f085d-5129-4f40-aa08-28bd3c65b525)'. Ident: 'index-577--4104909142373009110', commit timestamp: 'Timestamp(1574796753, 5054)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.911-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-565--4104909142373009110, commit timestamp: Timestamp(1574796753, 5054)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.914-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: fc65838c-12bd-4d4f-a9dd-e61882a8db9e: test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 ( 0b7fb243-90ba-407b-98d2-da072fdf9da4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.924-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 with provided UUID: 89419957-a37a-473d-81e8-459e71eec2ff and options: { uuid: UUID("89419957-a37a-473d-81e8-459e71eec2ff"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.936-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.895-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.961-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.895-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.740-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2730199950090343778, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7731944270974283340, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796753525), clusterTime: Timestamp(1574796753, 1518) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 1518), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 214ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.961-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.895-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: d634d7d7-4474-4675-8e4b-bda603e786cd: test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 (0b7fb243-90ba-407b-98d2-da072fdf9da4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.741-0500 I COMMAND [conn65] CMD: dropIndexes test5_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.961-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: e33b6645-b3b1-414e-a8a3-d76ffed35759: test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 (cd42a7f3-2ac8-4d86-910c-de28468e0433 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.895-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.742-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 99e9e38f-c5e2-4e07-b434-c6a67204e8b8: test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d ( 077c5d1a-0471-4e96-b9bd-2a04de9b63c6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.961-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.896-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.758-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.962-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.896-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 (94d70fbc-859b-4ca2-8d81-ed31691453c3) to test5_fsmdb0.agg_out and drop 9f6f085d-5129-4f40-aa08-28bd3c65b525.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.758-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.965-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.898-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.758-0500 I STORAGE [conn110] Index build initialized: 6a153653-edda-407b-9128-a22eebbbd6d1: test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 (0b7fb243-90ba-407b-98d2-da072fdf9da4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:33.966-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e33b6645-b3b1-414e-a8a3-d76ffed35759: test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 ( cd42a7f3-2ac8-4d86-910c-de28468e0433 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.898-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (9f6f085d-5129-4f40-aa08-28bd3c65b525) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 5054), t: 1 } and commit timestamp Timestamp(1574796753, 5054)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.758-0500 I INDEX [conn110] Waiting for index build to complete: 6a153653-edda-407b-9128-a22eebbbd6d1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.898-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (9f6f085d-5129-4f40-aa08-28bd3c65b525).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.795-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d (077c5d1a-0471-4e96-b9bd-2a04de9b63c6) to test5_fsmdb0.agg_out and drop 94d70fbc-859b-4ca2-8d81-ed31691453c3.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.758-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.898-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 94d70fbc-859b-4ca2-8d81-ed31691453c3 from test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.795-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (94d70fbc-859b-4ca2-8d81-ed31691453c3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 1), t: 1 } and commit timestamp Timestamp(1574796756, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.758-0500 I INDEX [conn108] Index build completed: 99e9e38f-c5e2-4e07-b434-c6a67204e8b8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.898-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9f6f085d-5129-4f40-aa08-28bd3c65b525)'. Ident: 'index-566--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 5054)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.795-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (94d70fbc-859b-4ca2-8d81-ed31691453c3).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.759-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.898-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9f6f085d-5129-4f40-aa08-28bd3c65b525)'. Ident: 'index-577--8000595249233899911', commit timestamp: 'Timestamp(1574796753, 5054)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.795-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 077c5d1a-0471-4e96-b9bd-2a04de9b63c6 from test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.759-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f with generated UUID: 01e5d84d-3870-4b0e-bb76-a1b64331197f and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.898-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-565--8000595249233899911, commit timestamp: Timestamp(1574796753, 5054)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.795-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (94d70fbc-859b-4ca2-8d81-ed31691453c3)'. Ident: 'index-572--4104909142373009110', commit timestamp: 'Timestamp(1574796756, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.761-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 with generated UUID: cd42a7f3-2ac8-4d86-910c-de28468e0433 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.899-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d634d7d7-4474-4675-8e4b-bda603e786cd: test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 ( 0b7fb243-90ba-407b-98d2-da072fdf9da4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.795-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (94d70fbc-859b-4ca2-8d81-ed31691453c3)'. Ident: 'index-581--4104909142373009110', commit timestamp: 'Timestamp(1574796756, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.762-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.911-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 with provided UUID: 89419957-a37a-473d-81e8-459e71eec2ff and options: { uuid: UUID("89419957-a37a-473d-81e8-459e71eec2ff"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.795-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-571--4104909142373009110, commit timestamp: Timestamp(1574796756, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.780-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 6a153653-edda-407b-9128-a22eebbbd6d1: test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 ( 0b7fb243-90ba-407b-98d2-da072fdf9da4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.923-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.780-0500 I INDEX [conn110] Index build completed: 6a153653-edda-407b-9128-a22eebbbd6d1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.948-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.787-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.948-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.795-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.948-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 08979029-7883-4521-a567-1a95bbc20014: test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 (cd42a7f3-2ac8-4d86-910c-de28468e0433 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.795-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.949-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.796-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (9f6f085d-5129-4f40-aa08-28bd3c65b525) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796753, 5054), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.949-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.796-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (9f6f085d-5129-4f40-aa08-28bd3c65b525).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.951-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.796-0500 I STORAGE [conn114] renameCollection: renaming collection 94d70fbc-859b-4ca2-8d81-ed31691453c3 from test5_fsmdb0.tmp.agg_out.1f9c81e3-ff54-4210-a282-639bda4a7938 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:33.954-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 08979029-7883-4521-a567-1a95bbc20014: test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 ( cd42a7f3-2ac8-4d86-910c-de28468e0433 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.796-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9f6f085d-5129-4f40-aa08-28bd3c65b525)'. Ident: 'index-556-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 5054)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.793-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d (077c5d1a-0471-4e96-b9bd-2a04de9b63c6) to test5_fsmdb0.agg_out and drop 94d70fbc-859b-4ca2-8d81-ed31691453c3.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.796-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9f6f085d-5129-4f40-aa08-28bd3c65b525)'. Ident: 'index-561-8224331490264904478', commit timestamp: 'Timestamp(1574796753, 5054)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.794-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (94d70fbc-859b-4ca2-8d81-ed31691453c3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 1), t: 1 } and commit timestamp Timestamp(1574796756, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.796-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-553-8224331490264904478, commit timestamp: Timestamp(1574796753, 5054)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.794-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (94d70fbc-859b-4ca2-8d81-ed31691453c3).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.796-0500 I INDEX [conn112] Registering index build: 06e9be54-e9a7-4743-b33d-a9ca67ac2c05
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.794-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 077c5d1a-0471-4e96-b9bd-2a04de9b63c6 from test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.796-0500 I INDEX [conn46] Registering index build: 80f67e4e-bf71-44f5-8f5c-7da4f1a7426c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.794-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (94d70fbc-859b-4ca2-8d81-ed31691453c3)'. Ident: 'index-572--8000595249233899911', commit timestamp: 'Timestamp(1574796756, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.796-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8824087129923066509, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5264631281126594210, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796753606), clusterTime: Timestamp(1574796753, 2527) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 2527), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 189ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.794-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (94d70fbc-859b-4ca2-8d81-ed31691453c3)'. Ident: 'index-581--8000595249233899911', commit timestamp: 'Timestamp(1574796756, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.799-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 with generated UUID: 89419957-a37a-473d-81e8-459e71eec2ff and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.794-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-571--8000595249233899911, commit timestamp: Timestamp(1574796756, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.821-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.815-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.821-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.815-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.821-0500 I STORAGE [conn112] Index build initialized: 06e9be54-e9a7-4743-b33d-a9ca67ac2c05: test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 (cd42a7f3-2ac8-4d86-910c-de28468e0433 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.815-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 9f4d5b65-e58c-4490-8f09-592bdbae162b: test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f (01e5d84d-3870-4b0e-bb76-a1b64331197f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.821-0500 I INDEX [conn112] Waiting for index build to complete: 06e9be54-e9a7-4743-b33d-a9ca67ac2c05
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.815-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.822-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.831-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.815-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.830-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.817-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 (0b7fb243-90ba-407b-98d2-da072fdf9da4) to test5_fsmdb0.agg_out and drop 077c5d1a-0471-4e96-b9bd-2a04de9b63c6.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.831-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.830-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.818-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.831-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: b33ab9fb-325e-4205-8641-2c18e4d1b612: test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f (01e5d84d-3870-4b0e-bb76-a1b64331197f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.839-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.818-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (077c5d1a-0471-4e96-b9bd-2a04de9b63c6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 57), t: 1 } and commit timestamp Timestamp(1574796756, 57)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.831-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.846-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.818-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (077c5d1a-0471-4e96-b9bd-2a04de9b63c6).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.846-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.832-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.818-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 0b7fb243-90ba-407b-98d2-da072fdf9da4 from test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.846-0500 I STORAGE [conn46] Index build initialized: 80f67e4e-bf71-44f5-8f5c-7da4f1a7426c: test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f (01e5d84d-3870-4b0e-bb76-a1b64331197f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.818-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (077c5d1a-0471-4e96-b9bd-2a04de9b63c6)'. Ident: 'index-576--8000595249233899911', commit timestamp: 'Timestamp(1574796756, 57)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.819-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (077c5d1a-0471-4e96-b9bd-2a04de9b63c6)'. Ident: 'index-585--8000595249233899911', commit timestamp: 'Timestamp(1574796756, 57)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.819-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-575--8000595249233899911, commit timestamp: Timestamp(1574796756, 57)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.821-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 9f4d5b65-e58c-4490-8f09-592bdbae162b: test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f ( 01e5d84d-3870-4b0e-bb76-a1b64331197f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:33.847-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 06e9be54-e9a7-4743-b33d-a9ca67ac2c05: test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 ( cd42a7f3-2ac8-4d86-910c-de28468e0433 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.791-0500 I INDEX [conn46] Waiting for index build to complete: 80f67e4e-bf71-44f5-8f5c-7da4f1a7426c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.791-0500 I INDEX [conn112] Index build completed: 06e9be54-e9a7-4743-b33d-a9ca67ac2c05
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.791-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.791-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 5054), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 266 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2995ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.791-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (94d70fbc-859b-4ca2-8d81-ed31691453c3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 1), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.791-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (94d70fbc-859b-4ca2-8d81-ed31691453c3).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.791-0500 I STORAGE [conn108] renameCollection: renaming collection 077c5d1a-0471-4e96-b9bd-2a04de9b63c6 from test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.791-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (94d70fbc-859b-4ca2-8d81-ed31691453c3)'. Ident: 'index-564-8224331490264904478', commit timestamp: 'Timestamp(1574796756, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.791-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (94d70fbc-859b-4ca2-8d81-ed31691453c3)'. Ident: 'index-567-8224331490264904478', commit timestamp: 'Timestamp(1574796756, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.791-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-562-8224331490264904478, commit timestamp: Timestamp(1574796756, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.791-0500 I INDEX [conn114] Registering index build: cf924917-8c6e-4f38-93dd-b9b634f3b183
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.791-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.791-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d appName: "tid:0" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.201feed9-a962-4c98-b887-933d7bda9d1d", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "moderate", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 6004), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2966179 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2966ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.791-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796753, 4548), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796753, 4548), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796753, 4548). Collection minimum timestamp is Timestamp(1574796753, 6007)" errName:SnapshotUnavailable errCode:246 reslen:582 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2928900 } }, Collection: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 18 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2929ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.791-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4441287120501933905, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1541425946929385113, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796753658), clusterTime: Timestamp(1574796753, 3033) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 3033), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3132ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.792-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.792-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 appName: "tid:3" command: insert { insert: "tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2", bypassDocumentValidation: false, ordered: false, documents: 500, shardVersion: [ Timestamp(0, 0), ObjectId('000000000000000000000000') ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, writeConcern: { w: 1, wtimeout: 0 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 5119), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } ninserted:500 keysInserted:1000 numYields:0 reslen:400 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 8 } }, ReplicationStateTransition: { acquireCount: { w: 8 } }, Global: { acquireCount: { w: 8 } }, Database: { acquireCount: { w: 8 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 2966568 } }, Collection: { acquireCount: { w: 8 } }, Mutex: { acquireCount: { r: 1016 } } } flowControl:{ acquireCount: 8 } storage:{ timeWaitingMicros: { schemaLock: 9374 } } protocol:op_msg 2989ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.800-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.806-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.806-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.806-0500 I STORAGE [conn114] Index build initialized: cf924917-8c6e-4f38-93dd-b9b634f3b183: test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 (89419957-a37a-473d-81e8-459e71eec2ff ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.806-0500 I INDEX [conn114] Waiting for index build to complete: cf924917-8c6e-4f38-93dd-b9b634f3b183
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.806-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.806-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (077c5d1a-0471-4e96-b9bd-2a04de9b63c6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 57), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.806-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 80f67e4e-bf71-44f5-8f5c-7da4f1a7426c: test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f ( 01e5d84d-3870-4b0e-bb76-a1b64331197f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.806-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (077c5d1a-0471-4e96-b9bd-2a04de9b63c6).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.807-0500 I INDEX [conn46] Index build completed: 80f67e4e-bf71-44f5-8f5c-7da4f1a7426c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.807-0500 I STORAGE [conn110] renameCollection: renaming collection 0b7fb243-90ba-407b-98d2-da072fdf9da4 from test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.807-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (077c5d1a-0471-4e96-b9bd-2a04de9b63c6)'. Ident: 'index-570-8224331490264904478', commit timestamp: 'Timestamp(1574796756, 57)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.807-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 5053), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 8004 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 3018ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.833-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 (0b7fb243-90ba-407b-98d2-da072fdf9da4) to test5_fsmdb0.agg_out and drop 077c5d1a-0471-4e96-b9bd-2a04de9b63c6.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.807-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (077c5d1a-0471-4e96-b9bd-2a04de9b63c6)'. Ident: 'index-571-8224331490264904478', commit timestamp: 'Timestamp(1574796756, 57)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.807-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-568-8224331490264904478, commit timestamp: Timestamp(1574796756, 57)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.807-0500 I COMMAND [conn67] CMD: dropIndexes test5_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.807-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.807-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7186722615164993165, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7647862736633999747, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796753706), clusterTime: Timestamp(1574796753, 3542) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 3542), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3100ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.834-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.838-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 with provided UUID: f87c8fe1-7e7e-44ae-9910-3925c29cb283 and options: { uuid: UUID("f87c8fe1-7e7e-44ae-9910-3925c29cb283"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:36.843-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796753, 5049), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3083ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:36.873-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796753, 4548), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3131ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.808-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.834-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (077c5d1a-0471-4e96-b9bd-2a04de9b63c6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 57), t: 1 } and commit timestamp Timestamp(1574796756, 57)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.853-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:36.911-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796753, 5118), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3113ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:36.964-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796756, 121), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 155ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.810-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 with generated UUID: f87c8fe1-7e7e-44ae-9910-3925c29cb283 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.834-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (077c5d1a-0471-4e96-b9bd-2a04de9b63c6).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.854-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 with provided UUID: 3fc454f3-0359-4857-8052-b7b2df4ed12e and options: { uuid: UUID("3fc454f3-0359-4857-8052-b7b2df4ed12e"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:37.015-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796756, 121), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 206ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:37.048-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796756, 1193), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 173ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.810-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 with generated UUID: 3fc454f3-0359-4857-8052-b7b2df4ed12e and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.834-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 0b7fb243-90ba-407b-98d2-da072fdf9da4 from test5_fsmdb0.tmp.agg_out.e4ee018e-e734-4112-886e-cfe3506e29d2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.868-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:37.084-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796756, 1570), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 171ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:37.139-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796756, 2401), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 149ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.810-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.834-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (077c5d1a-0471-4e96-b9bd-2a04de9b63c6)'. Ident: 'index-576--4104909142373009110', commit timestamp: 'Timestamp(1574796756, 57)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.883-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:37.085-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796756, 690), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 240ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.827-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: cf924917-8c6e-4f38-93dd-b9b634f3b183: test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 ( 89419957-a37a-473d-81e8-459e71eec2ff ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.834-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (077c5d1a-0471-4e96-b9bd-2a04de9b63c6)'. Ident: 'index-585--4104909142373009110', commit timestamp: 'Timestamp(1574796756, 57)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.883-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:37.174-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796757, 67), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 158ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.827-0500 I INDEX [conn114] Index build completed: cf924917-8c6e-4f38-93dd-b9b634f3b183
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.834-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-575--4104909142373009110, commit timestamp: Timestamp(1574796756, 57)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.883-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: cf0b6343-ceb6-4e8f-8196-2f7e267dc998: test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 (89419957-a37a-473d-81e8-459e71eec2ff ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.828-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 6004), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 2960936 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2997ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.836-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: b33ab9fb-325e-4205-8641-2c18e4d1b612: test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f ( 01e5d84d-3870-4b0e-bb76-a1b64331197f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.883-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.854-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 with provided UUID: f87c8fe1-7e7e-44ae-9910-3925c29cb283 and options: { uuid: UUID("f87c8fe1-7e7e-44ae-9910-3925c29cb283"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.871-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.884-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.835-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.872-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 with provided UUID: 3fc454f3-0359-4857-8052-b7b2df4ed12e and options: { uuid: UUID("3fc454f3-0359-4857-8052-b7b2df4ed12e"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.887-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.842-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.888-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.889-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 (cd42a7f3-2ac8-4d86-910c-de28468e0433) to test5_fsmdb0.agg_out and drop 0b7fb243-90ba-407b-98d2-da072fdf9da4.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.842-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.905-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.889-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (0b7fb243-90ba-407b-98d2-da072fdf9da4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 626), t: 1 } and commit timestamp Timestamp(1574796756, 626)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.843-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (0b7fb243-90ba-407b-98d2-da072fdf9da4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 626), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.905-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.889-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (0b7fb243-90ba-407b-98d2-da072fdf9da4).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.843-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (0b7fb243-90ba-407b-98d2-da072fdf9da4).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.905-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 25ab3197-e5d2-4a74-b418-c658fb172e52: test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 (89419957-a37a-473d-81e8-459e71eec2ff ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.890-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection cd42a7f3-2ac8-4d86-910c-de28468e0433 from test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.843-0500 I STORAGE [conn112] renameCollection: renaming collection cd42a7f3-2ac8-4d86-910c-de28468e0433 from test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.905-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.890-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0b7fb243-90ba-407b-98d2-da072fdf9da4)'. Ident: 'index-584--8000595249233899911', commit timestamp: 'Timestamp(1574796756, 626)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.843-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0b7fb243-90ba-407b-98d2-da072fdf9da4)'. Ident: 'index-574-8224331490264904478', commit timestamp: 'Timestamp(1574796756, 626)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.905-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.890-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0b7fb243-90ba-407b-98d2-da072fdf9da4)'. Ident: 'index-591--8000595249233899911', commit timestamp: 'Timestamp(1574796756, 626)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.843-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0b7fb243-90ba-407b-98d2-da072fdf9da4)'. Ident: 'index-575-8224331490264904478', commit timestamp: 'Timestamp(1574796756, 626)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.907-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:39.617-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796757, 762), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2567ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.890-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-583--8000595249233899911, commit timestamp: Timestamp(1574796756, 626)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.843-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-572-8224331490264904478, commit timestamp: Timestamp(1574796756, 626)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.910-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 (cd42a7f3-2ac8-4d86-910c-de28468e0433) to test5_fsmdb0.agg_out and drop 0b7fb243-90ba-407b-98d2-da072fdf9da4.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.891-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: cf0b6343-ceb6-4e8f-8196-2f7e267dc998: test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 ( 89419957-a37a-473d-81e8-459e71eec2ff ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.843-0500 I INDEX [conn46] Registering index build: abb9ae3d-f39e-4efa-a8ff-620920455891
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.910-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 25ab3197-e5d2-4a74-b418-c658fb172e52: test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 ( 89419957-a37a-473d-81e8-459e71eec2ff ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.892-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 with provided UUID: 4a2e1ed1-f9fc-49f9-b144-94b6a8db1734 and options: { uuid: UUID("4a2e1ed1-f9fc-49f9-b144-94b6a8db1734"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.843-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2550659830663185233, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5158499364895920783, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796753760), clusterTime: Timestamp(1574796753, 5049) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 5050), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3082ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.910-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (0b7fb243-90ba-407b-98d2-da072fdf9da4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 626), t: 1 } and commit timestamp Timestamp(1574796756, 626)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.906-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.843-0500 I INDEX [conn108] Registering index build: f150710f-1678-49b1-bc5e-1e45ee8cbcac
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.910-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (0b7fb243-90ba-407b-98d2-da072fdf9da4).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.911-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f (01e5d84d-3870-4b0e-bb76-a1b64331197f) to test5_fsmdb0.agg_out and drop cd42a7f3-2ac8-4d86-910c-de28468e0433.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.846-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 with generated UUID: 4a2e1ed1-f9fc-49f9-b144-94b6a8db1734 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.910-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection cd42a7f3-2ac8-4d86-910c-de28468e0433 from test5_fsmdb0.tmp.agg_out.e462c87f-33f9-4243-ba76-dc0cbbe141f3 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.911-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (cd42a7f3-2ac8-4d86-910c-de28468e0433) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 1129), t: 1 } and commit timestamp Timestamp(1574796756, 1129)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.865-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.910-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0b7fb243-90ba-407b-98d2-da072fdf9da4)'. Ident: 'index-584--4104909142373009110', commit timestamp: 'Timestamp(1574796756, 626)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.911-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (cd42a7f3-2ac8-4d86-910c-de28468e0433).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.865-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.910-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0b7fb243-90ba-407b-98d2-da072fdf9da4)'. Ident: 'index-591--4104909142373009110', commit timestamp: 'Timestamp(1574796756, 626)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.911-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 01e5d84d-3870-4b0e-bb76-a1b64331197f from test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.865-0500 I STORAGE [conn46] Index build initialized: abb9ae3d-f39e-4efa-a8ff-620920455891: test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 (f87c8fe1-7e7e-44ae-9910-3925c29cb283 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.910-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-583--4104909142373009110, commit timestamp: Timestamp(1574796756, 626)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.911-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (cd42a7f3-2ac8-4d86-910c-de28468e0433)'. Ident: 'index-590--8000595249233899911', commit timestamp: 'Timestamp(1574796756, 1129)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.865-0500 I INDEX [conn46] Waiting for index build to complete: abb9ae3d-f39e-4efa-a8ff-620920455891
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.912-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 with provided UUID: 4a2e1ed1-f9fc-49f9-b144-94b6a8db1734 and options: { uuid: UUID("4a2e1ed1-f9fc-49f9-b144-94b6a8db1734"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.911-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (cd42a7f3-2ac8-4d86-910c-de28468e0433)'. Ident: 'index-595--8000595249233899911', commit timestamp: 'Timestamp(1574796756, 1129)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.872-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.929-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.911-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-589--8000595249233899911, commit timestamp: Timestamp(1574796756, 1129)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.872-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.934-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f (01e5d84d-3870-4b0e-bb76-a1b64331197f) to test5_fsmdb0.agg_out and drop cd42a7f3-2ac8-4d86-910c-de28468e0433.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.915-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b with provided UUID: 82180d2e-4c1c-427d-b4d0-cb5515958535 and options: { uuid: UUID("82180d2e-4c1c-427d-b4d0-cb5515958535"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.873-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (cd42a7f3-2ac8-4d86-910c-de28468e0433) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 1129), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.934-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (cd42a7f3-2ac8-4d86-910c-de28468e0433) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 1129), t: 1 } and commit timestamp Timestamp(1574796756, 1129)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.929-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.873-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (cd42a7f3-2ac8-4d86-910c-de28468e0433).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.934-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (cd42a7f3-2ac8-4d86-910c-de28468e0433).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.946-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.873-0500 I STORAGE [conn110] renameCollection: renaming collection 01e5d84d-3870-4b0e-bb76-a1b64331197f from test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.934-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 01e5d84d-3870-4b0e-bb76-a1b64331197f from test5_fsmdb0.tmp.agg_out.5edd385a-8564-4122-9c11-8226cf61689f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.946-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.873-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (cd42a7f3-2ac8-4d86-910c-de28468e0433)'. Ident: 'index-580-8224331490264904478', commit timestamp: 'Timestamp(1574796756, 1129)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.934-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (cd42a7f3-2ac8-4d86-910c-de28468e0433)'. Ident: 'index-590--4104909142373009110', commit timestamp: 'Timestamp(1574796756, 1129)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.946-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 3a8f4510-0bc2-4ab6-84f1-2e35fd9c1c4a: test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 (f87c8fe1-7e7e-44ae-9910-3925c29cb283 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.873-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (cd42a7f3-2ac8-4d86-910c-de28468e0433)'. Ident: 'index-581-8224331490264904478', commit timestamp: 'Timestamp(1574796756, 1129)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.934-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (cd42a7f3-2ac8-4d86-910c-de28468e0433)'. Ident: 'index-595--4104909142373009110', commit timestamp: 'Timestamp(1574796756, 1129)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.946-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.873-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-578-8224331490264904478, commit timestamp: Timestamp(1574796756, 1129)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.934-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-589--4104909142373009110, commit timestamp: Timestamp(1574796756, 1129)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.947-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.873-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.936-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b with provided UUID: 82180d2e-4c1c-427d-b4d0-cb5515958535 and options: { uuid: UUID("82180d2e-4c1c-427d-b4d0-cb5515958535"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.949-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.873-0500 I INDEX [conn112] Registering index build: c1b8d8ba-7ab8-4ade-b081-317563f9a469
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.949-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.950-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 (89419957-a37a-473d-81e8-459e71eec2ff) to test5_fsmdb0.agg_out and drop 01e5d84d-3870-4b0e-bb76-a1b64331197f.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.873-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2438792113499344545, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5178762034319300155, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796753742), clusterTime: Timestamp(1574796753, 4548) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 4548), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 15977 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3130ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.968-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.950-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (01e5d84d-3870-4b0e-bb76-a1b64331197f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 1570), t: 1 } and commit timestamp Timestamp(1574796756, 1570)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.873-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.968-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.950-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (01e5d84d-3870-4b0e-bb76-a1b64331197f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.876-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b with generated UUID: 82180d2e-4c1c-427d-b4d0-cb5515958535 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.968-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 4b68a29d-f831-4d0f-8c28-bd736442cb47: test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 (f87c8fe1-7e7e-44ae-9910-3925c29cb283 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.950-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 89419957-a37a-473d-81e8-459e71eec2ff from test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.887-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.968-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.950-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (01e5d84d-3870-4b0e-bb76-a1b64331197f)'. Ident: 'index-588--8000595249233899911', commit timestamp: 'Timestamp(1574796756, 1570)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.902-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.969-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.950-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (01e5d84d-3870-4b0e-bb76-a1b64331197f)'. Ident: 'index-597--8000595249233899911', commit timestamp: 'Timestamp(1574796756, 1570)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.902-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.971-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.950-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-587--8000595249233899911, commit timestamp: Timestamp(1574796756, 1570)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.902-0500 I STORAGE [conn108] Index build initialized: f150710f-1678-49b1-bc5e-1e45ee8cbcac: test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 (3fc454f3-0359-4857-8052-b7b2df4ed12e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.971-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 (89419957-a37a-473d-81e8-459e71eec2ff) to test5_fsmdb0.agg_out and drop 01e5d84d-3870-4b0e-bb76-a1b64331197f.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.951-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff with provided UUID: 7d074ddf-7f6c-4fbc-9506-932c0623c0cd and options: { uuid: UUID("7d074ddf-7f6c-4fbc-9506-932c0623c0cd"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.902-0500 I INDEX [conn108] Waiting for index build to complete: f150710f-1678-49b1-bc5e-1e45ee8cbcac
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.971-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (01e5d84d-3870-4b0e-bb76-a1b64331197f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 1570), t: 1 } and commit timestamp Timestamp(1574796756, 1570)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.952-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3a8f4510-0bc2-4ab6-84f1-2e35fd9c1c4a: test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 ( f87c8fe1-7e7e-44ae-9910-3925c29cb283 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.903-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: abb9ae3d-f39e-4efa-a8ff-620920455891: test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 ( f87c8fe1-7e7e-44ae-9910-3925c29cb283 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.971-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (01e5d84d-3870-4b0e-bb76-a1b64331197f).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.967-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.903-0500 I INDEX [conn46] Index build completed: abb9ae3d-f39e-4efa-a8ff-620920455891
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.971-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 89419957-a37a-473d-81e8-459e71eec2ff from test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.985-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.910-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.972-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (01e5d84d-3870-4b0e-bb76-a1b64331197f)'. Ident: 'index-588--4104909142373009110', commit timestamp: 'Timestamp(1574796756, 1570)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.985-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.910-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.972-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (01e5d84d-3870-4b0e-bb76-a1b64331197f)'. Ident: 'index-597--4104909142373009110', commit timestamp: 'Timestamp(1574796756, 1570)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.985-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 4db30d1c-77cb-4e97-8def-5c8b96d0dd4a: test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 (3fc454f3-0359-4857-8052-b7b2df4ed12e ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.911-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (01e5d84d-3870-4b0e-bb76-a1b64331197f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 1570), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.972-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-587--4104909142373009110, commit timestamp: Timestamp(1574796756, 1570)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.985-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.911-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (01e5d84d-3870-4b0e-bb76-a1b64331197f).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.972-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff with provided UUID: 7d074ddf-7f6c-4fbc-9506-932c0623c0cd and options: { uuid: UUID("7d074ddf-7f6c-4fbc-9506-932c0623c0cd"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.986-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.911-0500 I STORAGE [conn114] renameCollection: renaming collection 89419957-a37a-473d-81e8-459e71eec2ff from test5_fsmdb0.tmp.agg_out.b563ddcc-b02d-4f5e-b4d1-6dfc9a7f38e0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.974-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4b68a29d-f831-4d0f-8c28-bd736442cb47: test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 ( f87c8fe1-7e7e-44ae-9910-3925c29cb283 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.988-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.911-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (01e5d84d-3870-4b0e-bb76-a1b64331197f)'. Ident: 'index-579-8224331490264904478', commit timestamp: 'Timestamp(1574796756, 1570)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:36.989-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.990-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 4db30d1c-77cb-4e97-8def-5c8b96d0dd4a: test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 ( 3fc454f3-0359-4857-8052-b7b2df4ed12e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.911-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (01e5d84d-3870-4b0e-bb76-a1b64331197f)'. Ident: 'index-585-8224331490264904478', commit timestamp: 'Timestamp(1574796756, 1570)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.008-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.992-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 (f87c8fe1-7e7e-44ae-9910-3925c29cb283) to test5_fsmdb0.agg_out and drop 89419957-a37a-473d-81e8-459e71eec2ff.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.911-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-577-8224331490264904478, commit timestamp: Timestamp(1574796756, 1570)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.008-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.992-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (89419957-a37a-473d-81e8-459e71eec2ff) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 2076), t: 1 } and commit timestamp Timestamp(1574796756, 2076)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.911-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.008-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 95d03495-cd3b-4a87-a4a5-84f2eae2d911: test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 (3fc454f3-0359-4857-8052-b7b2df4ed12e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.992-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (89419957-a37a-473d-81e8-459e71eec2ff).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.911-0500 I INDEX [conn110] Registering index build: 044c759c-23d9-4669-9c20-01143ae45309
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.008-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.992-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection f87c8fe1-7e7e-44ae-9910-3925c29cb283 from test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.911-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6517382431848327134, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2874946789301995097, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796753798), clusterTime: Timestamp(1574796753, 5118) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796753, 5118), signature: { hash: BinData(0, 9122DBC09EDDBB444843EB147B27BD8172624FA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796747, 7), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3112ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.009-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.992-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (89419957-a37a-473d-81e8-459e71eec2ff)'. Ident: 'index-594--8000595249233899911', commit timestamp: 'Timestamp(1574796756, 2076)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.912-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.012-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.992-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (89419957-a37a-473d-81e8-459e71eec2ff)'. Ident: 'index-603--8000595249233899911', commit timestamp: 'Timestamp(1574796756, 2076)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.914-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff with generated UUID: 7d074ddf-7f6c-4fbc-9506-932c0623c0cd and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.015-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 95d03495-cd3b-4a87-a4a5-84f2eae2d911: test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 ( 3fc454f3-0359-4857-8052-b7b2df4ed12e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:36.992-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-593--8000595249233899911, commit timestamp: Timestamp(1574796756, 2076)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.921-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.015-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 (f87c8fe1-7e7e-44ae-9910-3925c29cb283) to test5_fsmdb0.agg_out and drop 89419957-a37a-473d-81e8-459e71eec2ff.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.012-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.935-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.015-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (89419957-a37a-473d-81e8-459e71eec2ff) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 2076), t: 1 } and commit timestamp Timestamp(1574796756, 2076)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.012-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.935-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.015-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (89419957-a37a-473d-81e8-459e71eec2ff).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.012-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 8c577ce3-fe06-4e97-9b34-40c75be363c3: test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b (82180d2e-4c1c-427d-b4d0-cb5515958535 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.935-0500 I STORAGE [conn112] Index build initialized: c1b8d8ba-7ab8-4ade-b081-317563f9a469: test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.015-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection f87c8fe1-7e7e-44ae-9910-3925c29cb283 from test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.012-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.935-0500 I INDEX [conn112] Waiting for index build to complete: c1b8d8ba-7ab8-4ade-b081-317563f9a469
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.015-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (89419957-a37a-473d-81e8-459e71eec2ff)'. Ident: 'index-594--4104909142373009110', commit timestamp: 'Timestamp(1574796756, 2076)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.013-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.938-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: f150710f-1678-49b1-bc5e-1e45ee8cbcac: test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 ( 3fc454f3-0359-4857-8052-b7b2df4ed12e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.015-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (89419957-a37a-473d-81e8-459e71eec2ff)'. Ident: 'index-603--4104909142373009110', commit timestamp: 'Timestamp(1574796756, 2076)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.015-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-593--4104909142373009110, commit timestamp: Timestamp(1574796756, 2076)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.946-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.015-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.036-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.963-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.025-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 8c577ce3-fe06-4e97-9b34-40c75be363c3: test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b ( 82180d2e-4c1c-427d-b4d0-cb5515958535 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.036-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.963-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.032-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.036-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: f706562f-0291-4d4f-bb28-184a90e7e78b: test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b (82180d2e-4c1c-427d-b4d0-cb5515958535 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.963-0500 I STORAGE [conn110] Index build initialized: 044c759c-23d9-4669-9c20-01143ae45309: test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b (82180d2e-4c1c-427d-b4d0-cb5515958535 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.032-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.036-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.963-0500 I INDEX [conn110] Waiting for index build to complete: 044c759c-23d9-4669-9c20-01143ae45309
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.032-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: e19da030-6a90-4754-b0fe-3abc093c19aa: test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.037-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.963-0500 I INDEX [conn108] Index build completed: f150710f-1678-49b1-bc5e-1e45ee8cbcac
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.032-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.039-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.963-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.032-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.050-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: f706562f-0291-4d4f-bb28-184a90e7e78b: test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b ( 82180d2e-4c1c-427d-b4d0-cb5515958535 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.963-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796756, 625), signature: { hash: BinData(0, 627793C6F43E65D2670FF52D2AE5CBCC92C2D1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 7305 } }, Collection: { acquireCount: { w: 1, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 11 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 120ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.035-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.057-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.963-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (89419957-a37a-473d-81e8-459e71eec2ff) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796756, 2076), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.035-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 with provided UUID: bf1d6756-980c-4b37-bb2d-c0b3a10a5bec and options: { uuid: UUID("bf1d6756-980c-4b37-bb2d-c0b3a10a5bec"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.057-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.963-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (89419957-a37a-473d-81e8-459e71eec2ff).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.039-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: e19da030-6a90-4754-b0fe-3abc093c19aa: test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 ( 4a2e1ed1-f9fc-49f9-b144-94b6a8db1734 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.057-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: ab61bbee-f6e0-45c4-990a-02939b52610e: test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.963-0500 I STORAGE [conn114] renameCollection: renaming collection f87c8fe1-7e7e-44ae-9910-3925c29cb283 from test5_fsmdb0.tmp.agg_out.123b1bda-468a-4053-b750-5889982ece52 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.054-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.057-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.964-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (89419957-a37a-473d-81e8-459e71eec2ff)'. Ident: 'index-584-8224331490264904478', commit timestamp: 'Timestamp(1574796756, 2076)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.073-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.058-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.964-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (89419957-a37a-473d-81e8-459e71eec2ff)'. Ident: 'index-587-8224331490264904478', commit timestamp: 'Timestamp(1574796756, 2076)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.073-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.060-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.964-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-582-8224331490264904478, commit timestamp: Timestamp(1574796756, 2076)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.073-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 57347923-e765-46a1-9a56-312da6a108be: test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff (7d074ddf-7f6c-4fbc-9506-932c0623c0cd ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.061-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 with provided UUID: bf1d6756-980c-4b37-bb2d-c0b3a10a5bec and options: { uuid: UUID("bf1d6756-980c-4b37-bb2d-c0b3a10a5bec"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.964-0500 I INDEX [conn46] Registering index build: c6cad6e1-4b43-4e48-aa7e-a404a2624691
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.073-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.062-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: ab61bbee-f6e0-45c4-990a-02939b52610e: test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 ( 4a2e1ed1-f9fc-49f9-b144-94b6a8db1734 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.964-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.074-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.079-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.964-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.074-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 (3fc454f3-0359-4857-8052-b7b2df4ed12e) to test5_fsmdb0.agg_out and drop f87c8fe1-7e7e-44ae-9910-3925c29cb283.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.097-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.964-0500 I  COMMAND  [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1357261478383018612, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7006457610034389457, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796756808), clusterTime: Timestamp(1574796756, 121) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796756, 121), signature: { hash: BinData(0, 627793C6F43E65D2670FF52D2AE5CBCC92C2D1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 154ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.076-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.097-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.965-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.076-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (f87c8fe1-7e7e-44ae-9910-3925c29cb283) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 3), t: 1 } and commit timestamp Timestamp(1574796757, 3)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.097-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: deb1b96e-b8f0-4c07-8488-187229c6c2ca: test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff (7d074ddf-7f6c-4fbc-9506-932c0623c0cd ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.965-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.076-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (f87c8fe1-7e7e-44ae-9910-3925c29cb283).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.097-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.966-0500 I COMMAND [conn70] CMD: dropIndexes test5_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.076-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 3fc454f3-0359-4857-8052-b7b2df4ed12e from test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.097-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.976-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.076-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f87c8fe1-7e7e-44ae-9910-3925c29cb283)'. Ident: 'index-600--8000595249233899911', commit timestamp: 'Timestamp(1574796757, 3)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.098-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 (3fc454f3-0359-4857-8052-b7b2df4ed12e) to test5_fsmdb0.agg_out and drop f87c8fe1-7e7e-44ae-9910-3925c29cb283.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.978-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.076-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f87c8fe1-7e7e-44ae-9910-3925c29cb283)'. Ident: 'index-609--8000595249233899911', commit timestamp: 'Timestamp(1574796757, 3)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.100-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.987-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.076-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-599--8000595249233899911, commit timestamp: Timestamp(1574796757, 3)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.101-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (f87c8fe1-7e7e-44ae-9910-3925c29cb283) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 3), t: 1 } and commit timestamp Timestamp(1574796757, 3)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.988-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.079-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 57347923-e765-46a1-9a56-312da6a108be: test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff ( 7d074ddf-7f6c-4fbc-9506-932c0623c0cd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.101-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (f87c8fe1-7e7e-44ae-9910-3925c29cb283).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.988-0500 I STORAGE [conn46] Index build initialized: c6cad6e1-4b43-4e48-aa7e-a404a2624691: test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff (7d074ddf-7f6c-4fbc-9506-932c0623c0cd ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.080-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e with provided UUID: 4517e47e-de20-43a7-a189-1bab6ffe0a2a and options: { uuid: UUID("4517e47e-de20-43a7-a189-1bab6ffe0a2a"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.101-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 3fc454f3-0359-4857-8052-b7b2df4ed12e from test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.988-0500 I INDEX [conn46] Waiting for index build to complete: c6cad6e1-4b43-4e48-aa7e-a404a2624691
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.100-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.101-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f87c8fe1-7e7e-44ae-9910-3925c29cb283)'. Ident: 'index-600--4104909142373009110', commit timestamp: 'Timestamp(1574796757, 3)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.988-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.105-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b (82180d2e-4c1c-427d-b4d0-cb5515958535) to test5_fsmdb0.agg_out and drop 3fc454f3-0359-4857-8052-b7b2df4ed12e.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.101-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f87c8fe1-7e7e-44ae-9910-3925c29cb283)'. Ident: 'index-609--4104909142373009110', commit timestamp: 'Timestamp(1574796757, 3)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.991-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 with generated UUID: bf1d6756-980c-4b37-bb2d-c0b3a10a5bec and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.106-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (3fc454f3-0359-4857-8052-b7b2df4ed12e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 634), t: 1 } and commit timestamp Timestamp(1574796757, 634)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.101-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-599--4104909142373009110, commit timestamp: Timestamp(1574796757, 3)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.992-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 044c759c-23d9-4669-9c20-01143ae45309: test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b ( 82180d2e-4c1c-427d-b4d0-cb5515958535 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.106-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (3fc454f3-0359-4857-8052-b7b2df4ed12e).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.102-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: deb1b96e-b8f0-4c07-8488-187229c6c2ca: test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff ( 7d074ddf-7f6c-4fbc-9506-932c0623c0cd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.992-0500 I INDEX [conn110] Index build completed: 044c759c-23d9-4669-9c20-01143ae45309
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.106-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 82180d2e-4c1c-427d-b4d0-cb5515958535 from test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.105-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e with provided UUID: 4517e47e-de20-43a7-a189-1bab6ffe0a2a and options: { uuid: UUID("4517e47e-de20-43a7-a189-1bab6ffe0a2a"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.996-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c1b8d8ba-7ab8-4ade-b081-317563f9a469: test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 ( 4a2e1ed1-f9fc-49f9-b144-94b6a8db1734 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.106-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3fc454f3-0359-4857-8052-b7b2df4ed12e)'. Ident: 'index-602--8000595249233899911', commit timestamp: 'Timestamp(1574796757, 634)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.117-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.996-0500 I INDEX [conn112] Index build completed: c1b8d8ba-7ab8-4ade-b081-317563f9a469
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.106-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3fc454f3-0359-4857-8052-b7b2df4ed12e)'. Ident: 'index-613--8000595249233899911', commit timestamp: 'Timestamp(1574796757, 634)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.132-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b (82180d2e-4c1c-427d-b4d0-cb5515958535) to test5_fsmdb0.agg_out and drop 3fc454f3-0359-4857-8052-b7b2df4ed12e.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.996-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796756, 1129), signature: { hash: BinData(0, 627793C6F43E65D2670FF52D2AE5CBCC92C2D1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 8660 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 123ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.106-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-601--8000595249233899911, commit timestamp: Timestamp(1574796757, 634)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.132-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (3fc454f3-0359-4857-8052-b7b2df4ed12e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 634), t: 1 } and commit timestamp Timestamp(1574796757, 634)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:36.996-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.112-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b with provided UUID: ec59aa05-4754-4102-9a71-692013247d23 and options: { uuid: UUID("ec59aa05-4754-4102-9a71-692013247d23"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.132-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (3fc454f3-0359-4857-8052-b7b2df4ed12e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.007-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.125-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.132-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 82180d2e-4c1c-427d-b4d0-cb5515958535 from test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.014-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.143-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.132-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3fc454f3-0359-4857-8052-b7b2df4ed12e)'. Ident: 'index-602--4104909142373009110', commit timestamp: 'Timestamp(1574796757, 634)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.014-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.143-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.132-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3fc454f3-0359-4857-8052-b7b2df4ed12e)'. Ident: 'index-613--4104909142373009110', commit timestamp: 'Timestamp(1574796757, 634)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.014-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (f87c8fe1-7e7e-44ae-9910-3925c29cb283) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 3), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.143-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 9f75a63d-34b5-4604-8326-824b4684a804: test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.132-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-601--4104909142373009110, commit timestamp: Timestamp(1574796757, 634)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.014-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (f87c8fe1-7e7e-44ae-9910-3925c29cb283).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.143-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.135-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b with provided UUID: ec59aa05-4754-4102-9a71-692013247d23 and options: { uuid: UUID("ec59aa05-4754-4102-9a71-692013247d23"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.014-0500 I STORAGE [conn108] renameCollection: renaming collection 3fc454f3-0359-4857-8052-b7b2df4ed12e from test5_fsmdb0.tmp.agg_out.d2dab6b9-3f94-4a57-b92a-a8426a095746 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.144-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.152-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.014-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f87c8fe1-7e7e-44ae-9910-3925c29cb283)'. Ident: 'index-591-8224331490264904478', commit timestamp: 'Timestamp(1574796757, 3)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.147-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff (7d074ddf-7f6c-4fbc-9506-932c0623c0cd) to test5_fsmdb0.agg_out and drop 82180d2e-4c1c-427d-b4d0-cb5515958535.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.170-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.014-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f87c8fe1-7e7e-44ae-9910-3925c29cb283)'. Ident: 'index-593-8224331490264904478', commit timestamp: 'Timestamp(1574796757, 3)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.147-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.170-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.014-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-589-8224331490264904478, commit timestamp: Timestamp(1574796757, 3)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.148-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (82180d2e-4c1c-427d-b4d0-cb5515958535) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 1511), t: 1 } and commit timestamp Timestamp(1574796757, 1511)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.170-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: d9a70607-6863-496c-8b90-8f63810a0553: test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.014-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: c6cad6e1-4b43-4e48-aa7e-a404a2624691: test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff ( 7d074ddf-7f6c-4fbc-9506-932c0623c0cd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.148-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (82180d2e-4c1c-427d-b4d0-cb5515958535).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.170-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.014-0500 I INDEX [conn46] Index build completed: c6cad6e1-4b43-4e48-aa7e-a404a2624691
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.148-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 7d074ddf-7f6c-4fbc-9506-932c0623c0cd from test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.171-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.014-0500 I INDEX [conn114] Registering index build: a1e4599e-e75f-4cb7-97a7-f1fd094274f3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.148-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (82180d2e-4c1c-427d-b4d0-cb5515958535)'. Ident: 'index-608--8000595249233899911', commit timestamp: 'Timestamp(1574796757, 1511)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.173-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.014-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4059052104668432221, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1901789730311429445, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796756808), clusterTime: Timestamp(1574796756, 121) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796756, 121), signature: { hash: BinData(0, 627793C6F43E65D2670FF52D2AE5CBCC92C2D1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 205ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.148-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (82180d2e-4c1c-427d-b4d0-cb5515958535)'. Ident: 'index-615--8000595249233899911', commit timestamp: 'Timestamp(1574796757, 1511)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.174-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff (7d074ddf-7f6c-4fbc-9506-932c0623c0cd) to test5_fsmdb0.agg_out and drop 82180d2e-4c1c-427d-b4d0-cb5515958535.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.017-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e with generated UUID: 4517e47e-de20-43a7-a189-1bab6ffe0a2a and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.148-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-607--8000595249233899911, commit timestamp: Timestamp(1574796757, 1511)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.175-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (82180d2e-4c1c-427d-b4d0-cb5515958535) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 1511), t: 1 } and commit timestamp Timestamp(1574796757, 1511)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.038-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.148-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734) to test5_fsmdb0.agg_out and drop 7d074ddf-7f6c-4fbc-9506-932c0623c0cd.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.175-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (82180d2e-4c1c-427d-b4d0-cb5515958535).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.038-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.148-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (7d074ddf-7f6c-4fbc-9506-932c0623c0cd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 1512), t: 1 } and commit timestamp Timestamp(1574796757, 1512)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.175-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 7d074ddf-7f6c-4fbc-9506-932c0623c0cd from test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.038-0500 I STORAGE [conn114] Index build initialized: a1e4599e-e75f-4cb7-97a7-f1fd094274f3: test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.148-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (7d074ddf-7f6c-4fbc-9506-932c0623c0cd).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.175-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (82180d2e-4c1c-427d-b4d0-cb5515958535)'. Ident: 'index-608--4104909142373009110', commit timestamp: 'Timestamp(1574796757, 1511)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.038-0500 I INDEX [conn114] Waiting for index build to complete: a1e4599e-e75f-4cb7-97a7-f1fd094274f3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.149-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 4a2e1ed1-f9fc-49f9-b144-94b6a8db1734 from test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.175-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (82180d2e-4c1c-427d-b4d0-cb5515958535)'. Ident: 'index-615--4104909142373009110', commit timestamp: 'Timestamp(1574796757, 1511)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.047-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.149-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7d074ddf-7f6c-4fbc-9506-932c0623c0cd)'. Ident: 'index-612--8000595249233899911', commit timestamp: 'Timestamp(1574796757, 1512)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.175-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-607--4104909142373009110, commit timestamp: Timestamp(1574796757, 1511)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.047-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.149-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7d074ddf-7f6c-4fbc-9506-932c0623c0cd)'. Ident: 'index-621--8000595249233899911', commit timestamp: 'Timestamp(1574796757, 1512)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.175-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734) to test5_fsmdb0.agg_out and drop 7d074ddf-7f6c-4fbc-9506-932c0623c0cd.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.047-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (3fc454f3-0359-4857-8052-b7b2df4ed12e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 634), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.149-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-611--8000595249233899911, commit timestamp: Timestamp(1574796757, 1512)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.175-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (7d074ddf-7f6c-4fbc-9506-932c0623c0cd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 1512), t: 1 } and commit timestamp Timestamp(1574796757, 1512)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.047-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (3fc454f3-0359-4857-8052-b7b2df4ed12e).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.151-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 9f75a63d-34b5-4604-8326-824b4684a804: test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 ( bf1d6756-980c-4b37-bb2d-c0b3a10a5bec ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.175-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (7d074ddf-7f6c-4fbc-9506-932c0623c0cd).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.047-0500 I STORAGE [conn110] renameCollection: renaming collection 82180d2e-4c1c-427d-b4d0-cb5515958535 from test5_fsmdb0.tmp.agg_out.00d823e1-6250-4bad-8c95-56f8c5c5677b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.166-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.175-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 4a2e1ed1-f9fc-49f9-b144-94b6a8db1734 from test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.047-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3fc454f3-0359-4857-8052-b7b2df4ed12e)'. Ident: 'index-592-8224331490264904478', commit timestamp: 'Timestamp(1574796757, 634)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.166-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.175-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7d074ddf-7f6c-4fbc-9506-932c0623c0cd)'. Ident: 'index-612--4104909142373009110', commit timestamp: 'Timestamp(1574796757, 1512)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.047-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3fc454f3-0359-4857-8052-b7b2df4ed12e)'. Ident: 'index-597-8224331490264904478', commit timestamp: 'Timestamp(1574796757, 634)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.166-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 56130faf-6d74-4d8e-a29c-d0ef15747bc4: test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e (4517e47e-de20-43a7-a189-1bab6ffe0a2a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.175-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7d074ddf-7f6c-4fbc-9506-932c0623c0cd)'. Ident: 'index-621--4104909142373009110', commit timestamp: 'Timestamp(1574796757, 1512)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.047-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-590-8224331490264904478, commit timestamp: Timestamp(1574796757, 634)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.166-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.175-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-611--4104909142373009110, commit timestamp: Timestamp(1574796757, 1512)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.048-0500 I INDEX [conn108] Registering index build: f7728175-5421-46cf-bebd-f050473a669a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.167-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.176-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d9a70607-6863-496c-8b90-8f63810a0553: test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 ( bf1d6756-980c-4b37-bb2d-c0b3a10a5bec ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.048-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.169-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.192-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.048-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 9140264975624414143, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7383169266137757738, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796756874), clusterTime: Timestamp(1574796756, 1193) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796756, 1193), signature: { hash: BinData(0, 627793C6F43E65D2670FF52D2AE5CBCC92C2D1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 172ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.171-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 with provided UUID: 86f10f1c-9f3e-4c9b-8e4c-d6a159902655 and options: { uuid: UUID("86f10f1c-9f3e-4c9b-8e4c-d6a159902655"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.192-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.048-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.172-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 56130faf-6d74-4d8e-a29c-d0ef15747bc4: test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e ( 4517e47e-de20-43a7-a189-1bab6ffe0a2a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.192-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 5cb696c8-5401-45aa-a049-bfc83106467b: test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e (4517e47e-de20-43a7-a189-1bab6ffe0a2a ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.051-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b with generated UUID: ec59aa05-4754-4102-9a71-692013247d23 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.186-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.192-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.058-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.187-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb with provided UUID: fbe4c4d3-a547-4624-84e7-dd2ade23e5fb and options: { uuid: UUID("fbe4c4d3-a547-4624-84e7-dd2ade23e5fb"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.192-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.074-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.203-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.195-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.074-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.219-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.198-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 with provided UUID: 86f10f1c-9f3e-4c9b-8e4c-d6a159902655 and options: { uuid: UUID("86f10f1c-9f3e-4c9b-8e4c-d6a159902655"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.074-0500 I STORAGE [conn108] Index build initialized: f7728175-5421-46cf-bebd-f050473a669a: test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e (4517e47e-de20-43a7-a189-1bab6ffe0a2a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.219-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.199-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 5cb696c8-5401-45aa-a049-bfc83106467b: test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e ( 4517e47e-de20-43a7-a189-1bab6ffe0a2a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.074-0500 I INDEX [conn108] Waiting for index build to complete: f7728175-5421-46cf-bebd-f050473a669a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.219-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 3618820d-7c0d-4552-80ae-90dbd5da9b69: test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b (ec59aa05-4754-4102-9a71-692013247d23 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.212-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.075-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: a1e4599e-e75f-4cb7-97a7-f1fd094274f3: test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 ( bf1d6756-980c-4b37-bb2d-c0b3a10a5bec ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.219-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.213-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb with provided UUID: fbe4c4d3-a547-4624-84e7-dd2ade23e5fb and options: { uuid: UUID("fbe4c4d3-a547-4624-84e7-dd2ade23e5fb"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.075-0500 I INDEX [conn114] Index build completed: a1e4599e-e75f-4cb7-97a7-f1fd094274f3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.219-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.228-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.083-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.220-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec) to test5_fsmdb0.agg_out and drop 4a2e1ed1-f9fc-49f9-b144-94b6a8db1734.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.243-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.083-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.222-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.243-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.083-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (82180d2e-4c1c-427d-b4d0-cb5515958535) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 1511), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.222-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 2020), t: 1 } and commit timestamp Timestamp(1574796757, 2020)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.243-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 2aea5f11-937b-424f-a96d-4f60c607c857: test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b (ec59aa05-4754-4102-9a71-692013247d23 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.084-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (82180d2e-4c1c-427d-b4d0-cb5515958535).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.222-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.243-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.084-0500 I STORAGE [conn112] renameCollection: renaming collection 7d074ddf-7f6c-4fbc-9506-932c0623c0cd from test5_fsmdb0.tmp.agg_out.5301c06b-4cff-42eb-a453-592f1aab62ff to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.222-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection bf1d6756-980c-4b37-bb2d-c0b3a10a5bec from test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.244-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.084-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (82180d2e-4c1c-427d-b4d0-cb5515958535)'. Ident: 'index-600-8224331490264904478', commit timestamp: 'Timestamp(1574796757, 1511)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.222-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734)'. Ident: 'index-606--8000595249233899911', commit timestamp: 'Timestamp(1574796757, 2020)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.245-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec) to test5_fsmdb0.agg_out and drop 4a2e1ed1-f9fc-49f9-b144-94b6a8db1734.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.084-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (82180d2e-4c1c-427d-b4d0-cb5515958535)'. Ident: 'index-605-8224331490264904478', commit timestamp: 'Timestamp(1574796757, 1511)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.222-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734)'. Ident: 'index-617--8000595249233899911', commit timestamp: 'Timestamp(1574796757, 2020)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.246-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.084-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-598-8224331490264904478, commit timestamp: Timestamp(1574796757, 1511)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.222-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-605--8000595249233899911, commit timestamp: Timestamp(1574796757, 2020)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.246-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 2020), t: 1 } and commit timestamp Timestamp(1574796757, 2020)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.084-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.224-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 3618820d-7c0d-4552-80ae-90dbd5da9b69: test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b ( ec59aa05-4754-4102-9a71-692013247d23 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.246-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.084-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (7d074ddf-7f6c-4fbc-9506-932c0623c0cd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 1512), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.226-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 with provided UUID: 101fd173-019c-4512-af56-396319e293e5 and options: { uuid: UUID("101fd173-019c-4512-af56-396319e293e5"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.246-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection bf1d6756-980c-4b37-bb2d-c0b3a10a5bec from test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.084-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (7d074ddf-7f6c-4fbc-9506-932c0623c0cd).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.240-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.246-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734)'. Ident: 'index-606--4104909142373009110', commit timestamp: 'Timestamp(1574796757, 2020)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.084-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2721330593163661164, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8536852761015165426, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796756913), clusterTime: Timestamp(1574796756, 1570) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796756, 1570), signature: { hash: BinData(0, 627793C6F43E65D2670FF52D2AE5CBCC92C2D1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 170ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.084-0500 I STORAGE [conn46] renameCollection: renaming collection 4a2e1ed1-f9fc-49f9-b144-94b6a8db1734 from test5_fsmdb0.tmp.agg_out.8a16859b-65a5-4e29-a74c-4793bf4a1d23 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.246-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734)'. Ident: 'index-617--4104909142373009110', commit timestamp: 'Timestamp(1574796757, 2020)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.245-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e (4517e47e-de20-43a7-a189-1bab6ffe0a2a) to test5_fsmdb0.agg_out and drop bf1d6756-980c-4b37-bb2d-c0b3a10a5bec.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.084-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7d074ddf-7f6c-4fbc-9506-932c0623c0cd)'. Ident: 'index-604-8224331490264904478', commit timestamp: 'Timestamp(1574796757, 1512)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.246-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-605--4104909142373009110, commit timestamp: Timestamp(1574796757, 2020)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.245-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 2779), t: 1 } and commit timestamp Timestamp(1574796757, 2779)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.084-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7d074ddf-7f6c-4fbc-9506-932c0623c0cd)'. Ident: 'index-607-8224331490264904478', commit timestamp: 'Timestamp(1574796757, 1512)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.248-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 2aea5f11-937b-424f-a96d-4f60c607c857: test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b ( ec59aa05-4754-4102-9a71-692013247d23 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.245-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.084-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-602-8224331490264904478, commit timestamp: Timestamp(1574796757, 1512)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.250-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 with provided UUID: 101fd173-019c-4512-af56-396319e293e5 and options: { uuid: UUID("101fd173-019c-4512-af56-396319e293e5"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.245-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection 4517e47e-de20-43a7-a189-1bab6ffe0a2a from test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.084-0500 I INDEX [conn110] Registering index build: 3a50d035-da24-440b-8472-0173056d6647
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.261-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.245-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec)'. Ident: 'index-620--8000595249233899911', commit timestamp: 'Timestamp(1574796757, 2779)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.084-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.267-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e (4517e47e-de20-43a7-a189-1bab6ffe0a2a) to test5_fsmdb0.agg_out and drop bf1d6756-980c-4b37-bb2d-c0b3a10a5bec.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.245-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec)'. Ident: 'index-627--8000595249233899911', commit timestamp: 'Timestamp(1574796757, 2779)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.084-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6644497500013331272, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4740304516334632938, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796756845), clusterTime: Timestamp(1574796756, 690) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796756, 690), signature: { hash: BinData(0, 627793C6F43E65D2670FF52D2AE5CBCC92C2D1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 238ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.267-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 2779), t: 1 } and commit timestamp Timestamp(1574796757, 2779)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:37.245-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-619--8000595249233899911, commit timestamp: Timestamp(1574796757, 2779)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.085-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.267-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.086-0500 I COMMAND [conn65] CMD: dropIndexes test5_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.622-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b with provided UUID: 28bc3985-b059-4779-915b-99c639a70135 and options: { uuid: UUID("28bc3985-b059-4779-915b-99c639a70135"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.267-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 4517e47e-de20-43a7-a189-1bab6ffe0a2a from test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.087-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.634-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.267-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec)'. Ident: 'index-620--4104909142373009110', commit timestamp: 'Timestamp(1574796757, 2779)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.096-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: f7728175-5421-46cf-bebd-f050473a669a: test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e ( 4517e47e-de20-43a7-a189-1bab6ffe0a2a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.267-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec)'. Ident: 'index-627--4104909142373009110', commit timestamp: 'Timestamp(1574796757, 2779)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.109-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:37.267-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-619--4104909142373009110, commit timestamp: Timestamp(1574796757, 2779)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.109-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.636-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b with provided UUID: 28bc3985-b059-4779-915b-99c639a70135 and options: { uuid: UUID("28bc3985-b059-4779-915b-99c639a70135"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.109-0500 I STORAGE [conn110] Index build initialized: 3a50d035-da24-440b-8472-0173056d6647: test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b (ec59aa05-4754-4102-9a71-692013247d23 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.109-0500 I INDEX [conn110] Waiting for index build to complete: 3a50d035-da24-440b-8472-0173056d6647
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.109-0500 I INDEX [conn108] Index build completed: f7728175-5421-46cf-bebd-f050473a669a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.109-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.109-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 with generated UUID: 86f10f1c-9f3e-4c9b-8e4c-d6a159902655 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.109-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.111-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb with generated UUID: fbe4c4d3-a547-4624-84e7-dd2ade23e5fb and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.117-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.129-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.130-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 3a50d035-da24-440b-8472-0173056d6647: test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b ( ec59aa05-4754-4102-9a71-692013247d23 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.130-0500 I INDEX [conn110] Index build completed: 3a50d035-da24-440b-8472-0173056d6647
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.138-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.138-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.138-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 2020), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.138-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.138-0500 I STORAGE [conn114] renameCollection: renaming collection bf1d6756-980c-4b37-bb2d-c0b3a10a5bec from test5_fsmdb0.tmp.agg_out.ed42a011-fad9-44ed-86eb-76a475007719 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.138-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734)'. Ident: 'index-596-8224331490264904478', commit timestamp: 'Timestamp(1574796757, 2020)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.138-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4a2e1ed1-f9fc-49f9-b144-94b6a8db1734)'. Ident: 'index-601-8224331490264904478', commit timestamp: 'Timestamp(1574796757, 2020)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.138-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-594-8224331490264904478, commit timestamp: Timestamp(1574796757, 2020)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.138-0500 I INDEX [conn112] Registering index build: aa01011b-1999-4410-90c3-6614a771faac
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.138-0500 I INDEX [conn46] Registering index build: 4df88912-22bc-44bf-bed2-3189d72f3a43
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.139-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4926927341884757656, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 830931393734693056, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796756989), clusterTime: Timestamp(1574796756, 2401) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796756, 2401), signature: { hash: BinData(0, 627793C6F43E65D2670FF52D2AE5CBCC92C2D1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 148ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.141-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 with generated UUID: 101fd173-019c-4512-af56-396319e293e5 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.163-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.163-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.163-0500 I STORAGE [conn112] Index build initialized: aa01011b-1999-4410-90c3-6614a771faac: test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.164-0500 I INDEX [conn112] Waiting for index build to complete: aa01011b-1999-4410-90c3-6614a771faac
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.173-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.173-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.173-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796757, 2779), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.173-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.173-0500 I STORAGE [conn108] renameCollection: renaming collection 4517e47e-de20-43a7-a189-1bab6ffe0a2a from test5_fsmdb0.tmp.agg_out.c8d055e2-ee20-4c83-827c-b92f3b5ad62e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.173-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec)'. Ident: 'index-610-8224331490264904478', commit timestamp: 'Timestamp(1574796757, 2779)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.173-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bf1d6756-980c-4b37-bb2d-c0b3a10a5bec)'. Ident: 'index-611-8224331490264904478', commit timestamp: 'Timestamp(1574796757, 2779)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.173-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-609-8224331490264904478, commit timestamp: Timestamp(1574796757, 2779)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.173-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.173-0500 I INDEX [conn114] Registering index build: fa12a221-83c1-4211-bc53-657b26f32778
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.174-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8665749686988104555, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8215299413141824585, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796757016), clusterTime: Timestamp(1574796757, 67) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796757, 67), signature: { hash: BinData(0, 98E14D2AA1BFB2BD86C0561765265E5D47C393E0), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 156ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.174-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.177-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b with generated UUID: 28bc3985-b059-4779-915b-99c639a70135 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.178-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.193-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: aa01011b-1999-4410-90c3-6614a771faac: test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb ( fbe4c4d3-a547-4624-84e7-dd2ade23e5fb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.202-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:37.208-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.616-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.616-0500 I STORAGE [conn46] Index build initialized: 4df88912-22bc-44bf-bed2-3189d72f3a43: test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 (86f10f1c-9f3e-4c9b-8e4c-d6a159902655 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.616-0500 I INDEX [conn46] Waiting for index build to complete: 4df88912-22bc-44bf-bed2-3189d72f3a43
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.616-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b appName: "tid:0" command: create { create: "tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b", temp: true, validationLevel: "moderate", validationAction: "error", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796757, 2907), signature: { hash: BinData(0, 98E14D2AA1BFB2BD86C0561765265E5D47C393E0), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2439ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.616-0500 I INDEX [conn112] Index build completed: aa01011b-1999-4410-90c3-6614a771faac
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.616-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.616-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796757, 2019), signature: { hash: BinData(0, 98E14D2AA1BFB2BD86C0561765265E5D47C393E0), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 289 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2478ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.616-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (4517e47e-de20-43a7-a189-1bab6ffe0a2a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 2), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.616-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (4517e47e-de20-43a7-a189-1bab6ffe0a2a).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.616-0500 I STORAGE [conn110] renameCollection: renaming collection ec59aa05-4754-4102-9a71-692013247d23 from test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.616-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4517e47e-de20-43a7-a189-1bab6ffe0a2a)'. Ident: 'index-614-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.616-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4517e47e-de20-43a7-a189-1bab6ffe0a2a)'. Ident: 'index-615-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.616-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-612-8224331490264904478, commit timestamp: Timestamp(1574796759, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.617-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.617-0500 I INDEX [conn108] Registering index build: 3a28d89a-0003-491e-98ed-cd840e6a5b28
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.617-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b appName: "tid:1" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "moderate", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796757, 3026), signature: { hash: BinData(0, 98E14D2AA1BFB2BD86C0561765265E5D47C393E0), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2436776 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2437ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.617-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796757, 1834), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796757, 1898), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796757, 1834). Collection minimum timestamp is Timestamp(1574796757, 2085)" errName:SnapshotUnavailable errCode:246 reslen:582 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2417906 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2418ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.617-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5886160606000452345, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6986566181268010141, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796757049), clusterTime: Timestamp(1574796757, 762) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796757, 762), signature: { hash: BinData(0, 98E14D2AA1BFB2BD86C0561765265E5D47C393E0), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2566ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.617-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.620-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.620-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a with generated UUID: 0f06907b-c7c0-46be-8386-a7b0ef9298f5 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.627-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 4df88912-22bc-44bf-bed2-3189d72f3a43: test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 ( 86f10f1c-9f3e-4c9b-8e4c-d6a159902655 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.640-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.640-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.640-0500 I STORAGE [conn114] Index build initialized: fa12a221-83c1-4211-bc53-657b26f32778: test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 (101fd173-019c-4512-af56-396319e293e5 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.640-0500 I INDEX [conn114] Waiting for index build to complete: fa12a221-83c1-4211-bc53-657b26f32778
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.640-0500 I INDEX [conn46] Index build completed: 4df88912-22bc-44bf-bed2-3189d72f3a43
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.640-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796757, 2019), signature: { hash: BinData(0, 98E14D2AA1BFB2BD86C0561765265E5D47C393E0), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 18407 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2510ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.647-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.648-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.648-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.648-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 85255ee4-4f88-4d04-b2b1-84315950f044: test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.648-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.649-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.650-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.651-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.652-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b (ec59aa05-4754-4102-9a71-692013247d23) to test5_fsmdb0.agg_out and drop 4517e47e-de20-43a7-a189-1bab6ffe0a2a.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.652-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (4517e47e-de20-43a7-a189-1bab6ffe0a2a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 2), t: 1 } and commit timestamp Timestamp(1574796759, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.652-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (4517e47e-de20-43a7-a189-1bab6ffe0a2a).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.652-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection ec59aa05-4754-4102-9a71-692013247d23 from test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.652-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4517e47e-de20-43a7-a189-1bab6ffe0a2a)'. Ident: 'index-624--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.652-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4517e47e-de20-43a7-a189-1bab6ffe0a2a)'. Ident: 'index-629--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.652-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-623--8000595249233899911, commit timestamp: Timestamp(1574796759, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.654-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 85255ee4-4f88-4d04-b2b1-84315950f044: test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb ( fbe4c4d3-a547-4624-84e7-dd2ade23e5fb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.660-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.660-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.660-0500 I STORAGE [conn108] Index build initialized: 3a28d89a-0003-491e-98ed-cd840e6a5b28: test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b (28bc3985-b059-4779-915b-99c639a70135 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.660-0500 I INDEX [conn108] Waiting for index build to complete: 3a28d89a-0003-491e-98ed-cd840e6a5b28
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.660-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.660-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (ec59aa05-4754-4102-9a71-692013247d23) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 508), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.660-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (ec59aa05-4754-4102-9a71-692013247d23).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.660-0500 I STORAGE [conn46] renameCollection: renaming collection fbe4c4d3-a547-4624-84e7-dd2ade23e5fb from test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.660-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ec59aa05-4754-4102-9a71-692013247d23)'. Ident: 'index-618-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 508)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.660-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ec59aa05-4754-4102-9a71-692013247d23)'. Ident: 'index-619-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 508)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.660-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-616-8224331490264904478, commit timestamp: Timestamp(1574796759, 508)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.660-0500 I INDEX [conn110] Registering index build: 9862f35e-915a-4550-87c6-c771498b6bd5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.660-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.660-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.661-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4181833271001174374, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5475875884544686684, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796757110), clusterTime: Timestamp(1574796757, 1963) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796757, 1964), signature: { hash: BinData(0, 98E14D2AA1BFB2BD86C0561765265E5D47C393E0), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2549ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:39.661-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796757, 1963), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2550ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.661-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.662-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.664-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d with generated UUID: f9d6ae3d-980f-412a-a3b9-7e917efdba65 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.665-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.665-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.665-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: ebf039a1-128e-48ec-8af8-91fc2fdada39: test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.666-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.666-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.669-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.669-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.669-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.669-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 2abb39f2-8f6e-4c65-ab93-c368e043aa29: test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 (86f10f1c-9f3e-4c9b-8e4c-d6a159902655 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.669-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.669-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b (ec59aa05-4754-4102-9a71-692013247d23) to test5_fsmdb0.agg_out and drop 4517e47e-de20-43a7-a189-1bab6ffe0a2a.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.669-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (4517e47e-de20-43a7-a189-1bab6ffe0a2a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 2), t: 1 } and commit timestamp Timestamp(1574796759, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.669-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (4517e47e-de20-43a7-a189-1bab6ffe0a2a).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.669-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection ec59aa05-4754-4102-9a71-692013247d23 from test5_fsmdb0.tmp.agg_out.8ffc7a08-7a50-4a87-8063-23e9adabab1b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.669-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4517e47e-de20-43a7-a189-1bab6ffe0a2a)'. Ident: 'index-624--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.669-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4517e47e-de20-43a7-a189-1bab6ffe0a2a)'. Ident: 'index-629--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.669-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-623--4104909142373009110, commit timestamp: Timestamp(1574796759, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.670-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.671-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a with provided UUID: 0f06907b-c7c0-46be-8386-a7b0ef9298f5 and options: { uuid: UUID("0f06907b-c7c0-46be-8386-a7b0ef9298f5"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.671-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.673-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.673-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: ebf039a1-128e-48ec-8af8-91fc2fdada39: test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb ( fbe4c4d3-a547-4624-84e7-dd2ade23e5fb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.675-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.681-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 2abb39f2-8f6e-4c65-ab93-c368e043aa29: test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 ( 86f10f1c-9f3e-4c9b-8e4c-d6a159902655 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.688-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.688-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.688-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 7b1048ff-cc19-4cf3-b6bb-f9d89953342c: test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 (86f10f1c-9f3e-4c9b-8e4c-d6a159902655 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.688-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.688-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.689-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.690-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.691-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.691-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.691-0500 I STORAGE [conn110] Index build initialized: 9862f35e-915a-4550-87c6-c771498b6bd5: test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a (0f06907b-c7c0-46be-8386-a7b0ef9298f5 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.691-0500 I INDEX [conn110] Waiting for index build to complete: 9862f35e-915a-4550-87c6-c771498b6bd5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.691-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a with provided UUID: 0f06907b-c7c0-46be-8386-a7b0ef9298f5 and options: { uuid: UUID("0f06907b-c7c0-46be-8386-a7b0ef9298f5"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.692-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 3a28d89a-0003-491e-98ed-cd840e6a5b28: test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b ( 28bc3985-b059-4779-915b-99c639a70135 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.692-0500 I INDEX [conn108] Index build completed: 3a28d89a-0003-491e-98ed-cd840e6a5b28
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.692-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 7b1048ff-cc19-4cf3-b6bb-f9d89953342c: test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 ( 86f10f1c-9f3e-4c9b-8e4c-d6a159902655 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.693-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: fa12a221-83c1-4211-bc53-657b26f32778: test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 ( 101fd173-019c-4512-af56-396319e293e5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.693-0500 I INDEX [conn114] Index build completed: fa12a221-83c1-4211-bc53-657b26f32778
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.693-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796757, 2779), signature: { hash: BinData(0, 98E14D2AA1BFB2BD86C0561765265E5D47C393E0), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 565 } }, Collection: { acquireCount: { w: 1, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 86 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2519ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.699-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb) to test5_fsmdb0.agg_out and drop ec59aa05-4754-4102-9a71-692013247d23.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.699-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (ec59aa05-4754-4102-9a71-692013247d23) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 508), t: 1 } and commit timestamp Timestamp(1574796759, 508)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.699-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (ec59aa05-4754-4102-9a71-692013247d23).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.699-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection fbe4c4d3-a547-4624-84e7-dd2ade23e5fb from test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.699-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ec59aa05-4754-4102-9a71-692013247d23)'. Ident: 'index-626--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 508)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.699-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ec59aa05-4754-4102-9a71-692013247d23)'. Ident: 'index-635--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 508)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.699-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-625--8000595249233899911, commit timestamp: Timestamp(1574796759, 508)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.701-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.702-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.702-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 1015), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.702-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.702-0500 I STORAGE [conn112] renameCollection: renaming collection 86f10f1c-9f3e-4c9b-8e4c-d6a159902655 from test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.702-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb)'. Ident: 'index-624-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.702-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb)'. Ident: 'index-625-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.702-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-622-8224331490264904478, commit timestamp: Timestamp(1574796759, 1015)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.702-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.702-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1278101961775670291, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8357913056317456642, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796757086), clusterTime: Timestamp(1574796757, 1512) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796757, 1512), signature: { hash: BinData(0, 98E14D2AA1BFB2BD86C0561765265E5D47C393E0), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 22040 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2615ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.702-0500 I INDEX [conn46] Registering index build: d3f6ba60-392a-445e-afb1-872eba47af62
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:39.702-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796757, 1512), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2616ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.703-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.705-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 with generated UUID: 519c4b85-a724-4df5-afb1-c17a51e4e0f0 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.707-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.710-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.711-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 9862f35e-915a-4550-87c6-c771498b6bd5: test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a ( 0f06907b-c7c0-46be-8386-a7b0ef9298f5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.711-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d with provided UUID: f9d6ae3d-980f-412a-a3b9-7e917efdba65 and options: { uuid: UUID("f9d6ae3d-980f-412a-a3b9-7e917efdba65"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.714-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb) to test5_fsmdb0.agg_out and drop ec59aa05-4754-4102-9a71-692013247d23.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.715-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (ec59aa05-4754-4102-9a71-692013247d23) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 508), t: 1 } and commit timestamp Timestamp(1574796759, 508)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.715-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (ec59aa05-4754-4102-9a71-692013247d23).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.715-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection fbe4c4d3-a547-4624-84e7-dd2ade23e5fb from test5_fsmdb0.tmp.agg_out.932b63b7-10d8-4b37-a609-19285ba661bb to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.715-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ec59aa05-4754-4102-9a71-692013247d23)'. Ident: 'index-626--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 508)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.715-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ec59aa05-4754-4102-9a71-692013247d23)'. Ident: 'index-635--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 508)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.715-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-625--4104909142373009110, commit timestamp: Timestamp(1574796759, 508)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.720-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.720-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.721-0500 I STORAGE [conn46] Index build initialized: d3f6ba60-392a-445e-afb1-872eba47af62: test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d (f9d6ae3d-980f-412a-a3b9-7e917efdba65 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.721-0500 I INDEX [conn46] Waiting for index build to complete: d3f6ba60-392a-445e-afb1-872eba47af62
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.721-0500 I INDEX [conn110] Index build completed: 9862f35e-915a-4550-87c6-c771498b6bd5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.721-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.724-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.726-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d with provided UUID: f9d6ae3d-980f-412a-a3b9-7e917efdba65 and options: { uuid: UUID("f9d6ae3d-980f-412a-a3b9-7e917efdba65"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.729-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.737-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.740-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.740-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.740-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (86f10f1c-9f3e-4c9b-8e4c-d6a159902655) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 1586), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.740-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (86f10f1c-9f3e-4c9b-8e4c-d6a159902655).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.740-0500 I STORAGE [conn112] renameCollection: renaming collection 101fd173-019c-4512-af56-396319e293e5 from test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.740-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (86f10f1c-9f3e-4c9b-8e4c-d6a159902655)'. Ident: 'index-623-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 1586)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.740-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (86f10f1c-9f3e-4c9b-8e4c-d6a159902655)'. Ident: 'index-629-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 1586)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.740-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-621-8224331490264904478, commit timestamp: Timestamp(1574796759, 1586)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.740-0500 I INDEX [conn108] Registering index build: 99f13918-22ec-40e3-8b95-a806bc7aa491
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.740-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3476340627711681150, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6746479173725449068, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796757140), clusterTime: Timestamp(1574796757, 2084) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796757, 2084), signature: { hash: BinData(0, 98E14D2AA1BFB2BD86C0561765265E5D47C393E0), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2599ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:39.741-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796757, 2084), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2600ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.742-0500 I COMMAND [conn70] CMD: dropIndexes test5_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.744-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:39.762-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796757, 2843), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2587ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.743-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d3f6ba60-392a-445e-afb1-872eba47af62: test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d ( f9d6ae3d-980f-412a-a3b9-7e917efdba65 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:39.799-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796759, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 181ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.741-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.744-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.761-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:39.833-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796759, 572), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 170ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:39.964-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796759, 2151), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 201ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.760-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.761-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.744-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 8d9337a0-cb8e-4d39-a570-d14ec8f977e2: test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b (28bc3985-b059-4779-915b-99c639a70135 ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:39.870-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796759, 1015), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 166ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:39.999-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796759, 2657), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 198ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.760-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.761-0500 I STORAGE [conn108] Index build initialized: 99f13918-22ec-40e3-8b95-a806bc7aa491: test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 (519c4b85-a724-4df5-afb1-c17a51e4e0f0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.744-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:39.923-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796759, 2216), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 159ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.760-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 9285a425-4716-4749-a730-81dcb08cbaed: test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b (28bc3985-b059-4779-915b-99c639a70135 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.761-0500 I INDEX [conn108] Waiting for index build to complete: 99f13918-22ec-40e3-8b95-a806bc7aa491
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.745-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:40.017-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796759, 3160), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 183ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.760-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.761-0500 I INDEX [conn46] Index build completed: d3f6ba60-392a-445e-afb1-872eba47af62
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.746-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:40.054-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796759, 3537), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 182ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.761-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.761-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.752-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 8d9337a0-cb8e-4d39-a570-d14ec8f977e2: test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b ( 28bc3985-b059-4779-915b-99c639a70135 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.763-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:42.755-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796759, 4043), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2831ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.761-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (101fd173-019c-4512-af56-396319e293e5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 2152), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.764-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.767-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 9285a425-4716-4749-a730-81dcb08cbaed: test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b ( 28bc3985-b059-4779-915b-99c639a70135 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.761-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (101fd173-019c-4512-af56-396319e293e5).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.764-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.782-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.761-0500 I STORAGE [conn114] renameCollection: renaming collection 28bc3985-b059-4779-915b-99c639a70135 from test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.764-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 0650ee3b-a83b-4902-8691-179542f4011f: test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 (101fd173-019c-4512-af56-396319e293e5 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.782-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.762-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (101fd173-019c-4512-af56-396319e293e5)'. Ident: 'index-628-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 2152)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.765-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.782-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 796ecf40-a3c7-445c-b0a9-4436eb23023e: test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 (101fd173-019c-4512-af56-396319e293e5 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.762-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (101fd173-019c-4512-af56-396319e293e5)'. Ident: 'index-633-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 2152)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.765-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.782-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.762-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-626-8224331490264904478, commit timestamp: Timestamp(1574796759, 2152)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.768-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.783-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.762-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.769-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 (86f10f1c-9f3e-4c9b-8e4c-d6a159902655) to test5_fsmdb0.agg_out and drop fbe4c4d3-a547-4624-84e7-dd2ade23e5fb.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.785-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.762-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5098832750591355768, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3423535259939134559, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796757175), clusterTime: Timestamp(1574796757, 2843) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796757, 2907), signature: { hash: BinData(0, 98E14D2AA1BFB2BD86C0561765265E5D47C393E0), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796755, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2585ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.769-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 1015), t: 1 } and commit timestamp Timestamp(1574796759, 1015)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.788-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 (86f10f1c-9f3e-4c9b-8e4c-d6a159902655) to test5_fsmdb0.agg_out and drop fbe4c4d3-a547-4624-84e7-dd2ade23e5fb.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.763-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.769-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.788-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 1015), t: 1 } and commit timestamp Timestamp(1574796759, 1015)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.764-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a with generated UUID: a38e1881-81b7-45f9-89d7-1595b3dff2c1 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.769-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 86f10f1c-9f3e-4c9b-8e4c-d6a159902655 from test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.788-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.764-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 with generated UUID: d6137596-7229-4f0e-9258-a410f3a38483 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.769-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb)'. Ident: 'index-634--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 1015)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.788-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 86f10f1c-9f3e-4c9b-8e4c-d6a159902655 from test5_fsmdb0.tmp.agg_out.e4cb51f9-da04-4cec-915b-70cd2e39ddf7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.766-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.769-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb)'. Ident: 'index-641--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 1015)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.788-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb)'. Ident: 'index-634--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.782-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 99f13918-22ec-40e3-8b95-a806bc7aa491: test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 ( 519c4b85-a724-4df5-afb1-c17a51e4e0f0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.769-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-633--8000595249233899911, commit timestamp: Timestamp(1574796759, 1015)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.788-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (fbe4c4d3-a547-4624-84e7-dd2ade23e5fb)'. Ident: 'index-641--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.782-0500 I INDEX [conn108] Index build completed: 99f13918-22ec-40e3-8b95-a806bc7aa491
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.770-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 with provided UUID: 519c4b85-a724-4df5-afb1-c17a51e4e0f0 and options: { uuid: UUID("519c4b85-a724-4df5-afb1-c17a51e4e0f0"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.788-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-633--4104909142373009110, commit timestamp: Timestamp(1574796759, 1015)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.791-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.773-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 0650ee3b-a83b-4902-8691-179542f4011f: test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 ( 101fd173-019c-4512-af56-396319e293e5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.788-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 796ecf40-a3c7-445c-b0a9-4436eb23023e: test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 ( 101fd173-019c-4512-af56-396319e293e5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.798-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.789-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.790-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 with provided UUID: 519c4b85-a724-4df5-afb1-c17a51e4e0f0 and options: { uuid: UUID("519c4b85-a724-4df5-afb1-c17a51e4e0f0"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.798-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.819-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.804-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.799-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (28bc3985-b059-4779-915b-99c639a70135) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 2593), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.819-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.837-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.799-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (28bc3985-b059-4779-915b-99c639a70135).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.819-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 729b652d-19a1-4550-b106-32bd8cf4c0b6: test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a (0f06907b-c7c0-46be-8386-a7b0ef9298f5 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.837-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.799-0500 I STORAGE [conn110] renameCollection: renaming collection 0f06907b-c7c0-46be-8386-a7b0ef9298f5 from test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.819-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.837-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 5e20de81-ce04-4010-9e0a-d0c86e52f476: test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a (0f06907b-c7c0-46be-8386-a7b0ef9298f5 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.799-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (28bc3985-b059-4779-915b-99c639a70135)'. Ident: 'index-632-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 2593)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.820-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.837-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.799-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (28bc3985-b059-4779-915b-99c639a70135)'. Ident: 'index-637-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 2593)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.822-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.837-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.799-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-630-8224331490264904478, commit timestamp: Timestamp(1574796759, 2593)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.825-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 729b652d-19a1-4550-b106-32bd8cf4c0b6: test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a ( 0f06907b-c7c0-46be-8386-a7b0ef9298f5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.839-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.799-0500 I INDEX [conn112] Registering index build: e88170a7-6ec0-47ae-8be2-3b6aa853af2d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.840-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.844-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 5e20de81-ce04-4010-9e0a-d0c86e52f476: test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a ( 0f06907b-c7c0-46be-8386-a7b0ef9298f5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.799-0500 I INDEX [conn114] Registering index build: fe87530f-e505-424b-b055-bc03deafa387
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.840-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.858-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.799-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1399779910987540200, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8914111226069565880, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796759618), clusterTime: Timestamp(1574796759, 2) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796759, 2), signature: { hash: BinData(0, 51FCA4F8D1E8B167B6381EA173F59B5D2F1752AE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 179ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.840-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 258725a2-097d-411a-a5ee-8dc0eace4339: test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d (f9d6ae3d-980f-412a-a3b9-7e917efdba65 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.858-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.802-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b with generated UUID: 39acb95d-d2d3-4131-9240-2d099d5f97d3 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.840-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.858-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 7ddedcdb-3dc8-496a-83cf-277c5a717786: test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d (f9d6ae3d-980f-412a-a3b9-7e917efdba65 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.824-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.841-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.858-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.824-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.842-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 (101fd173-019c-4512-af56-396319e293e5) to test5_fsmdb0.agg_out and drop 86f10f1c-9f3e-4c9b-8e4c-d6a159902655.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.859-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.824-0500 I STORAGE [conn112] Index build initialized: e88170a7-6ec0-47ae-8be2-3b6aa853af2d: test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 (d6137596-7229-4f0e-9258-a410f3a38483 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.844-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.860-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 (101fd173-019c-4512-af56-396319e293e5) to test5_fsmdb0.agg_out and drop 86f10f1c-9f3e-4c9b-8e4c-d6a159902655.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.824-0500 I INDEX [conn112] Waiting for index build to complete: e88170a7-6ec0-47ae-8be2-3b6aa853af2d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.844-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (86f10f1c-9f3e-4c9b-8e4c-d6a159902655) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 1586), t: 1 } and commit timestamp Timestamp(1574796759, 1586)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.861-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.832-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.844-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (86f10f1c-9f3e-4c9b-8e4c-d6a159902655).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.861-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (86f10f1c-9f3e-4c9b-8e4c-d6a159902655) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 1586), t: 1 } and commit timestamp Timestamp(1574796759, 1586)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.832-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.844-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 101fd173-019c-4512-af56-396319e293e5 from test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.861-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (86f10f1c-9f3e-4c9b-8e4c-d6a159902655).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.832-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (0f06907b-c7c0-46be-8386-a7b0ef9298f5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 3096), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.844-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (86f10f1c-9f3e-4c9b-8e4c-d6a159902655)'. Ident: 'index-632--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 1586)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.861-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 101fd173-019c-4512-af56-396319e293e5 from test5_fsmdb0.tmp.agg_out.0916d2c3-0666-419e-a7c9-6e4b88f8e549 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.832-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (0f06907b-c7c0-46be-8386-a7b0ef9298f5).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.844-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (86f10f1c-9f3e-4c9b-8e4c-d6a159902655)'. Ident: 'index-643--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 1586)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.861-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (86f10f1c-9f3e-4c9b-8e4c-d6a159902655)'. Ident: 'index-632--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 1586)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.832-0500 I STORAGE [conn46] renameCollection: renaming collection f9d6ae3d-980f-412a-a3b9-7e917efdba65 from test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.844-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-631--8000595249233899911, commit timestamp: Timestamp(1574796759, 1586)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.861-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (86f10f1c-9f3e-4c9b-8e4c-d6a159902655)'. Ident: 'index-643--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 1586)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.832-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0f06907b-c7c0-46be-8386-a7b0ef9298f5)'. Ident: 'index-636-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 3096)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.847-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 258725a2-097d-411a-a5ee-8dc0eace4339: test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d ( f9d6ae3d-980f-412a-a3b9-7e917efdba65 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.861-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-631--4104909142373009110, commit timestamp: Timestamp(1574796759, 1586)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.832-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0f06907b-c7c0-46be-8386-a7b0ef9298f5)'. Ident: 'index-639-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 3096)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.849-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b (28bc3985-b059-4779-915b-99c639a70135) to test5_fsmdb0.agg_out and drop 101fd173-019c-4512-af56-396319e293e5.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.863-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 7ddedcdb-3dc8-496a-83cf-277c5a717786: test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d ( f9d6ae3d-980f-412a-a3b9-7e917efdba65 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.832-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-634-8224331490264904478, commit timestamp: Timestamp(1574796759, 3096)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.849-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (101fd173-019c-4512-af56-396319e293e5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 2152), t: 1 } and commit timestamp Timestamp(1574796759, 2152)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.867-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b (28bc3985-b059-4779-915b-99c639a70135) to test5_fsmdb0.agg_out and drop 101fd173-019c-4512-af56-396319e293e5.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.833-0500 I INDEX [conn108] Registering index build: ce0ad941-7fba-4ac8-a34b-dbbefc890782
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.849-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (101fd173-019c-4512-af56-396319e293e5).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.867-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (101fd173-019c-4512-af56-396319e293e5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 2152), t: 1 } and commit timestamp Timestamp(1574796759, 2152)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.833-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.849-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 28bc3985-b059-4779-915b-99c639a70135 from test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.867-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (101fd173-019c-4512-af56-396319e293e5).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.833-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1562954521198295636, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7471567402029203194, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796759662), clusterTime: Timestamp(1574796759, 572) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796759, 572), signature: { hash: BinData(0, 51FCA4F8D1E8B167B6381EA173F59B5D2F1752AE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 169ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.849-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (101fd173-019c-4512-af56-396319e293e5)'. Ident: 'index-638--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 2152)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.867-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 28bc3985-b059-4779-915b-99c639a70135 from test5_fsmdb0.tmp.agg_out.13cb5f61-2c7a-47d8-986d-a4117dba446b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.833-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.849-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (101fd173-019c-4512-af56-396319e293e5)'. Ident: 'index-651--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 2152)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.867-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (101fd173-019c-4512-af56-396319e293e5)'. Ident: 'index-638--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 2152)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.835-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.849-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-637--8000595249233899911, commit timestamp: Timestamp(1574796759, 2152)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.867-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (101fd173-019c-4512-af56-396319e293e5)'. Ident: 'index-651--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 2152)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.836-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 with generated UUID: 729b80eb-6f7d-413a-8110-a36cde3bc772 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.851-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a with provided UUID: a38e1881-81b7-45f9-89d7-1595b3dff2c1 and options: { uuid: UUID("a38e1881-81b7-45f9-89d7-1595b3dff2c1"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.867-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-637--4104909142373009110, commit timestamp: Timestamp(1574796759, 2152)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.844-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: e88170a7-6ec0-47ae-8be2-3b6aa853af2d: test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 ( d6137596-7229-4f0e-9258-a410f3a38483 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.868-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.869-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a with provided UUID: a38e1881-81b7-45f9-89d7-1595b3dff2c1 and options: { uuid: UUID("a38e1881-81b7-45f9-89d7-1595b3dff2c1"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.861-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.869-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 with provided UUID: d6137596-7229-4f0e-9258-a410f3a38483 and options: { uuid: UUID("d6137596-7229-4f0e-9258-a410f3a38483"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.884-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.861-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.885-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.886-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 with provided UUID: d6137596-7229-4f0e-9258-a410f3a38483 and options: { uuid: UUID("d6137596-7229-4f0e-9258-a410f3a38483"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.861-0500 I STORAGE [conn114] Index build initialized: fe87530f-e505-424b-b055-bc03deafa387: test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a (a38e1881-81b7-45f9-89d7-1595b3dff2c1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.901-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.901-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.861-0500 I INDEX [conn114] Waiting for index build to complete: fe87530f-e505-424b-b055-bc03deafa387
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.901-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.917-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.861-0500 I INDEX [conn112] Index build completed: e88170a7-6ec0-47ae-8be2-3b6aa853af2d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.902-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 61d72ae3-2e34-4605-b910-26494cc6785f: test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 (519c4b85-a724-4df5-afb1-c17a51e4e0f0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.917-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.869-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.902-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.917-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 37478798-8d95-424b-a99d-56f523edd651: test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 (519c4b85-a724-4df5-afb1-c17a51e4e0f0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.869-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.902-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.917-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.869-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (f9d6ae3d-980f-412a-a3b9-7e917efdba65) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 3537), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.904-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.918-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.869-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (f9d6ae3d-980f-412a-a3b9-7e917efdba65).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.906-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a (0f06907b-c7c0-46be-8386-a7b0ef9298f5) to test5_fsmdb0.agg_out and drop 28bc3985-b059-4779-915b-99c639a70135.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.921-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.869-0500 I STORAGE [conn110] renameCollection: renaming collection 519c4b85-a724-4df5-afb1-c17a51e4e0f0 from test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.906-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (28bc3985-b059-4779-915b-99c639a70135) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 2593), t: 1 } and commit timestamp Timestamp(1574796759, 2593)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.923-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a (0f06907b-c7c0-46be-8386-a7b0ef9298f5) to test5_fsmdb0.agg_out and drop 28bc3985-b059-4779-915b-99c639a70135.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.869-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f9d6ae3d-980f-412a-a3b9-7e917efdba65)'. Ident: 'index-642-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 3537)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.906-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (28bc3985-b059-4779-915b-99c639a70135).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.923-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (28bc3985-b059-4779-915b-99c639a70135) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 2593), t: 1 } and commit timestamp Timestamp(1574796759, 2593)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.869-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f9d6ae3d-980f-412a-a3b9-7e917efdba65)'. Ident: 'index-643-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 3537)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.907-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 0f06907b-c7c0-46be-8386-a7b0ef9298f5 from test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.923-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (28bc3985-b059-4779-915b-99c639a70135).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.869-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-640-8224331490264904478, commit timestamp: Timestamp(1574796759, 3537)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.907-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (28bc3985-b059-4779-915b-99c639a70135)'. Ident: 'index-640--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 2593)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.923-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 0f06907b-c7c0-46be-8386-a7b0ef9298f5 from test5_fsmdb0.tmp.agg_out.c9c09fef-146b-4000-9f29-d8d9695b2f1a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.869-0500 I INDEX [conn46] Registering index build: 2d9398db-ff82-4e75-b866-65156ea56aa7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.907-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (28bc3985-b059-4779-915b-99c639a70135)'. Ident: 'index-649--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 2593)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.923-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (28bc3985-b059-4779-915b-99c639a70135)'. Ident: 'index-640--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 2593)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.870-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.907-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-639--8000595249233899911, commit timestamp: Timestamp(1574796759, 2593)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.923-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (28bc3985-b059-4779-915b-99c639a70135)'. Ident: 'index-649--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 2593)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.870-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2864748277592064048, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 9154272099970170719, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796759704), clusterTime: Timestamp(1574796759, 1015) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796759, 1015), signature: { hash: BinData(0, 51FCA4F8D1E8B167B6381EA173F59B5D2F1752AE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 165ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.907-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 61d72ae3-2e34-4605-b910-26494cc6785f: test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 ( 519c4b85-a724-4df5-afb1-c17a51e4e0f0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.923-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-639--4104909142373009110, commit timestamp: Timestamp(1574796759, 2593)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.871-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.913-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b with provided UUID: 39acb95d-d2d3-4131-9240-2d099d5f97d3 and options: { uuid: UUID("39acb95d-d2d3-4131-9240-2d099d5f97d3"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.924-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 37478798-8d95-424b-a99d-56f523edd651: test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 ( 519c4b85-a724-4df5-afb1-c17a51e4e0f0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.872-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 with generated UUID: a6ce9d79-143a-4ced-9e13-c3f939be986a and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.929-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.926-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796759, 2593) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796759, 2657), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 3379 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 122ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.880-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.933-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d (f9d6ae3d-980f-412a-a3b9-7e917efdba65) to test5_fsmdb0.agg_out and drop 0f06907b-c7c0-46be-8386-a7b0ef9298f5.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.930-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b with provided UUID: 39acb95d-d2d3-4131-9240-2d099d5f97d3 and options: { uuid: UUID("39acb95d-d2d3-4131-9240-2d099d5f97d3"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.896-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.933-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (0f06907b-c7c0-46be-8386-a7b0ef9298f5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 3096), t: 1 } and commit timestamp Timestamp(1574796759, 3096)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.944-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.896-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.933-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (0f06907b-c7c0-46be-8386-a7b0ef9298f5).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.948-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d (f9d6ae3d-980f-412a-a3b9-7e917efdba65) to test5_fsmdb0.agg_out and drop 0f06907b-c7c0-46be-8386-a7b0ef9298f5.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.896-0500 I STORAGE [conn108] Index build initialized: ce0ad941-7fba-4ac8-a34b-dbbefc890782: test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b (39acb95d-d2d3-4131-9240-2d099d5f97d3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.933-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection f9d6ae3d-980f-412a-a3b9-7e917efdba65 from test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.948-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (0f06907b-c7c0-46be-8386-a7b0ef9298f5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 3096), t: 1 } and commit timestamp Timestamp(1574796759, 3096)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.896-0500 I INDEX [conn108] Waiting for index build to complete: ce0ad941-7fba-4ac8-a34b-dbbefc890782
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.933-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0f06907b-c7c0-46be-8386-a7b0ef9298f5)'. Ident: 'index-646--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 3096)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.948-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (0f06907b-c7c0-46be-8386-a7b0ef9298f5).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.898-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: fe87530f-e505-424b-b055-bc03deafa387: test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a ( a38e1881-81b7-45f9-89d7-1595b3dff2c1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.933-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0f06907b-c7c0-46be-8386-a7b0ef9298f5)'. Ident: 'index-655--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 3096)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.948-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection f9d6ae3d-980f-412a-a3b9-7e917efdba65 from test5_fsmdb0.tmp.agg_out.316fc8a6-d6f4-4af6-8c75-8d855f743d7d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.906-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.933-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-645--8000595249233899911, commit timestamp: Timestamp(1574796759, 3096)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.948-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0f06907b-c7c0-46be-8386-a7b0ef9298f5)'. Ident: 'index-646--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 3096)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.921-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.954-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.948-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0f06907b-c7c0-46be-8386-a7b0ef9298f5)'. Ident: 'index-655--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 3096)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.921-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.954-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.948-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-645--4104909142373009110, commit timestamp: Timestamp(1574796759, 3096)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.921-0500 I STORAGE [conn46] Index build initialized: 2d9398db-ff82-4e75-b866-65156ea56aa7: test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 (729b80eb-6f7d-413a-8110-a36cde3bc772 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.954-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: fe138666-e00c-4f77-9637-706c82cc3395: test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 (d6137596-7229-4f0e-9258-a410f3a38483 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.969-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.921-0500 I INDEX [conn46] Waiting for index build to complete: 2d9398db-ff82-4e75-b866-65156ea56aa7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.954-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.969-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.921-0500 I INDEX [conn114] Index build completed: fe87530f-e505-424b-b055-bc03deafa387
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.955-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.969-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: a45806f9-1b7a-4a93-a0d1-de787e070de7: test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 (d6137596-7229-4f0e-9258-a410f3a38483 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.921-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.957-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.969-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.921-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796759, 2592), signature: { hash: BinData(0, 51FCA4F8D1E8B167B6381EA173F59B5D2F1752AE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 15698 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 129ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.957-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 with provided UUID: 729b80eb-6f7d-413a-8110-a36cde3bc772 and options: { uuid: UUID("729b80eb-6f7d-413a-8110-a36cde3bc772"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.970-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.921-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (519c4b85-a724-4df5-afb1-c17a51e4e0f0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 4043), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.959-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: fe138666-e00c-4f77-9637-706c82cc3395: test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 ( d6137596-7229-4f0e-9258-a410f3a38483 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.973-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.921-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (519c4b85-a724-4df5-afb1-c17a51e4e0f0).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.974-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.975-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 with provided UUID: 729b80eb-6f7d-413a-8110-a36cde3bc772 and options: { uuid: UUID("729b80eb-6f7d-413a-8110-a36cde3bc772"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.921-0500 I STORAGE [conn112] renameCollection: renaming collection d6137596-7229-4f0e-9258-a410f3a38483 from test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.978-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 (519c4b85-a724-4df5-afb1-c17a51e4e0f0) to test5_fsmdb0.agg_out and drop f9d6ae3d-980f-412a-a3b9-7e917efdba65.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.976-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: a45806f9-1b7a-4a93-a0d1-de787e070de7: test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 ( d6137596-7229-4f0e-9258-a410f3a38483 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.921-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (519c4b85-a724-4df5-afb1-c17a51e4e0f0)'. Ident: 'index-646-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 4043)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.978-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (f9d6ae3d-980f-412a-a3b9-7e917efdba65) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 3537), t: 1 } and commit timestamp Timestamp(1574796759, 3537)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.991-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.921-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (519c4b85-a724-4df5-afb1-c17a51e4e0f0)'. Ident: 'index-647-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 4043)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.978-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (f9d6ae3d-980f-412a-a3b9-7e917efdba65).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.996-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 (519c4b85-a724-4df5-afb1-c17a51e4e0f0) to test5_fsmdb0.agg_out and drop f9d6ae3d-980f-412a-a3b9-7e917efdba65.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.921-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-644-8224331490264904478, commit timestamp: Timestamp(1574796759, 4043)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.978-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 519c4b85-a724-4df5-afb1-c17a51e4e0f0 from test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.996-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (f9d6ae3d-980f-412a-a3b9-7e917efdba65) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 3537), t: 1 } and commit timestamp Timestamp(1574796759, 3537)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.921-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.978-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f9d6ae3d-980f-412a-a3b9-7e917efdba65)'. Ident: 'index-648--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 3537)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.996-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (f9d6ae3d-980f-412a-a3b9-7e917efdba65).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.922-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.978-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f9d6ae3d-980f-412a-a3b9-7e917efdba65)'. Ident: 'index-657--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 3537)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.996-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 519c4b85-a724-4df5-afb1-c17a51e4e0f0 from test5_fsmdb0.tmp.agg_out.77f2e131-8731-4f2a-a33d-d2774a2e72b8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.922-0500 I INDEX [conn110] Registering index build: b77a63c3-4243-4c4f-8951-52c6b981de0b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.978-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-647--8000595249233899911, commit timestamp: Timestamp(1574796759, 3537)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.996-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f9d6ae3d-980f-412a-a3b9-7e917efdba65)'. Ident: 'index-648--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 3537)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.922-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.979-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 with provided UUID: a6ce9d79-143a-4ced-9e13-c3f939be986a and options: { uuid: UUID("a6ce9d79-143a-4ced-9e13-c3f939be986a"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.996-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f9d6ae3d-980f-412a-a3b9-7e917efdba65)'. Ident: 'index-657--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 3537)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.923-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7387447627294914582, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7100814232502303572, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796759763), clusterTime: Timestamp(1574796759, 2216) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796759, 2216), signature: { hash: BinData(0, 51FCA4F8D1E8B167B6381EA173F59B5D2F1752AE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 158ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:39.993-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.996-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-647--4104909142373009110, commit timestamp: Timestamp(1574796759, 3537)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.925-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.016-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:39.997-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 with provided UUID: a6ce9d79-143a-4ced-9e13-c3f939be986a and options: { uuid: UUID("a6ce9d79-143a-4ced-9e13-c3f939be986a"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.925-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b with generated UUID: 741a718d-5025-4bbf-8f15-0c288d313d26 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.016-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.012-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.926-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.016-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 03f08599-b06f-485d-a43e-1a3c3fc439a6: test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a (a38e1881-81b7-45f9-89d7-1595b3dff2c1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.033-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.933-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: ce0ad941-7fba-4ac8-a34b-dbbefc890782: test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b ( 39acb95d-d2d3-4131-9240-2d099d5f97d3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.016-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.033-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.942-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.017-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.033-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 572cd9aa-0bf6-4015-bd2b-a36690904cb7: test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a (a38e1881-81b7-45f9-89d7-1595b3dff2c1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.951-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.019-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.033-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.951-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.021-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 (d6137596-7229-4f0e-9258-a410f3a38483) to test5_fsmdb0.agg_out and drop 519c4b85-a724-4df5-afb1-c17a51e4e0f0.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.034-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.951-0500 I STORAGE [conn110] Index build initialized: b77a63c3-4243-4c4f-8951-52c6b981de0b: test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 (a6ce9d79-143a-4ced-9e13-c3f939be986a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.021-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (519c4b85-a724-4df5-afb1-c17a51e4e0f0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 4043), t: 1 } and commit timestamp Timestamp(1574796759, 4043)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.036-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.951-0500 I INDEX [conn110] Waiting for index build to complete: b77a63c3-4243-4c4f-8951-52c6b981de0b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.021-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (519c4b85-a724-4df5-afb1-c17a51e4e0f0).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.038-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 (d6137596-7229-4f0e-9258-a410f3a38483) to test5_fsmdb0.agg_out and drop 519c4b85-a724-4df5-afb1-c17a51e4e0f0.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.951-0500 I INDEX [conn108] Index build completed: ce0ad941-7fba-4ac8-a34b-dbbefc890782
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.021-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection d6137596-7229-4f0e-9258-a410f3a38483 from test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.038-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (519c4b85-a724-4df5-afb1-c17a51e4e0f0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 4043), t: 1 } and commit timestamp Timestamp(1574796759, 4043)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.951-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.021-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (519c4b85-a724-4df5-afb1-c17a51e4e0f0)'. Ident: 'index-654--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 4043)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.038-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (519c4b85-a724-4df5-afb1-c17a51e4e0f0).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.951-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796759, 3096), signature: { hash: BinData(0, 51FCA4F8D1E8B167B6381EA173F59B5D2F1752AE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 8114 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 118ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.021-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (519c4b85-a724-4df5-afb1-c17a51e4e0f0)'. Ident: 'index-663--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 4043)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.038-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection d6137596-7229-4f0e-9258-a410f3a38483 from test5_fsmdb0.tmp.agg_out.67d5cf3a-3043-4017-b4a2-13eb4340f076 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.958-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.021-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-653--8000595249233899911, commit timestamp: Timestamp(1574796759, 4043)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.038-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (519c4b85-a724-4df5-afb1-c17a51e4e0f0)'. Ident: 'index-654--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 4043)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.959-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 2d9398db-ff82-4e75-b866-65156ea56aa7: test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 ( 729b80eb-6f7d-413a-8110-a36cde3bc772 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.025-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 03f08599-b06f-485d-a43e-1a3c3fc439a6: test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a ( a38e1881-81b7-45f9-89d7-1595b3dff2c1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.038-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (519c4b85-a724-4df5-afb1-c17a51e4e0f0)'. Ident: 'index-663--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 4043)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.959-0500 I INDEX [conn46] Index build completed: 2d9398db-ff82-4e75-b866-65156ea56aa7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.039-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.038-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-653--4104909142373009110, commit timestamp: Timestamp(1574796759, 4043)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.960-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.039-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.040-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 572cd9aa-0bf6-4015-bd2b-a36690904cb7: test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a ( a38e1881-81b7-45f9-89d7-1595b3dff2c1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.963-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.039-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: d36244e5-3b55-48a9-b128-226c65b04786: test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b (39acb95d-d2d3-4131-9240-2d099d5f97d3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.056-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.963-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.039-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.056-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.963-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (d6137596-7229-4f0e-9258-a410f3a38483) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 4552), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.040-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.056-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: ba088d70-ff77-41b2-866a-c589377c3043: test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b (39acb95d-d2d3-4131-9240-2d099d5f97d3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.963-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (d6137596-7229-4f0e-9258-a410f3a38483).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.040-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b with provided UUID: 741a718d-5025-4bbf-8f15-0c288d313d26 and options: { uuid: UUID("741a718d-5025-4bbf-8f15-0c288d313d26"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.056-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.963-0500 I STORAGE [conn112] renameCollection: renaming collection a38e1881-81b7-45f9-89d7-1595b3dff2c1 from test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.043-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.057-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.963-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d6137596-7229-4f0e-9258-a410f3a38483)'. Ident: 'index-652-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 4552)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.052-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d36244e5-3b55-48a9-b128-226c65b04786: test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b ( 39acb95d-d2d3-4131-9240-2d099d5f97d3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.057-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796759, 4045) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796759, 4046), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 128ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.963-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d6137596-7229-4f0e-9258-a410f3a38483)'. Ident: 'index-653-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 4552)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.059-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.060-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.963-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-650-8224331490264904478, commit timestamp: Timestamp(1574796759, 4552)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.079-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.061-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b with provided UUID: 741a718d-5025-4bbf-8f15-0c288d313d26 and options: { uuid: UUID("741a718d-5025-4bbf-8f15-0c288d313d26"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.963-0500 I INDEX [conn114] Registering index build: 5922784e-0921-4294-a4a4-ef57ad1e7853
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.079-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.062-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ba088d70-ff77-41b2-866a-c589377c3043: test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b ( 39acb95d-d2d3-4131-9240-2d099d5f97d3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.964-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3269807635018640000, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8197650983181344459, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796759762), clusterTime: Timestamp(1574796759, 2151) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796759, 2216), signature: { hash: BinData(0, 51FCA4F8D1E8B167B6381EA173F59B5D2F1752AE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 200ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.079-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 7f5b84e3-6188-441b-8145-6ec66bb4c9d0: test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 (729b80eb-6f7d-413a-8110-a36cde3bc772 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.076-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.965-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: b77a63c3-4243-4c4f-8951-52c6b981de0b: test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 ( a6ce9d79-143a-4ced-9e13-c3f939be986a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.079-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.096-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.968-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 with generated UUID: 7b5de924-72b7-42c0-8fd2-b8e841365693 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.080-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.096-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.988-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.083-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.096-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: df2a1a1b-0569-4f67-989d-9ed74c199cf6: test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 (729b80eb-6f7d-413a-8110-a36cde3bc772 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.988-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.086-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 7f5b84e3-6188-441b-8145-6ec66bb4c9d0: test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 ( 729b80eb-6f7d-413a-8110-a36cde3bc772 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.096-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.988-0500 I STORAGE [conn114] Index build initialized: 5922784e-0921-4294-a4a4-ef57ad1e7853: test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b (741a718d-5025-4bbf-8f15-0c288d313d26 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.100-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.097-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.988-0500 I INDEX [conn114] Waiting for index build to complete: 5922784e-0921-4294-a4a4-ef57ad1e7853
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.100-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.099-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.988-0500 I INDEX [conn110] Index build completed: b77a63c3-4243-4c4f-8951-52c6b981de0b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.100-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 4d5f21fe-b2db-4b48-a667-4614691814d2: test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 (a6ce9d79-143a-4ced-9e13-c3f939be986a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.103-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: df2a1a1b-0569-4f67-989d-9ed74c199cf6: test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 ( 729b80eb-6f7d-413a-8110-a36cde3bc772 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.988-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.100-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.117-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.995-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.100-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.117-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.995-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.101-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a (a38e1881-81b7-45f9-89d7-1595b3dff2c1) to test5_fsmdb0.agg_out and drop d6137596-7229-4f0e-9258-a410f3a38483.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.117-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 9523c709-891d-4fdb-98b1-88eb5acd8d9e: test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 (a6ce9d79-143a-4ced-9e13-c3f939be986a ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.998-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.104-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.117-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.998-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.104-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (d6137596-7229-4f0e-9258-a410f3a38483) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 4552), t: 1 } and commit timestamp Timestamp(1574796759, 4552)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.118-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.998-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (a38e1881-81b7-45f9-89d7-1595b3dff2c1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 5505), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.104-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (d6137596-7229-4f0e-9258-a410f3a38483).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.118-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a (a38e1881-81b7-45f9-89d7-1595b3dff2c1) to test5_fsmdb0.agg_out and drop d6137596-7229-4f0e-9258-a410f3a38483.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.998-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (a38e1881-81b7-45f9-89d7-1595b3dff2c1).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.104-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection a38e1881-81b7-45f9-89d7-1595b3dff2c1 from test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.120-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.998-0500 I STORAGE [conn108] renameCollection: renaming collection 39acb95d-d2d3-4131-9240-2d099d5f97d3 from test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.105-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d6137596-7229-4f0e-9258-a410f3a38483)'. Ident: 'index-662--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 4552)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.120-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (d6137596-7229-4f0e-9258-a410f3a38483) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 4552), t: 1 } and commit timestamp Timestamp(1574796759, 4552)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.998-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a38e1881-81b7-45f9-89d7-1595b3dff2c1)'. Ident: 'index-651-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 5505)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.105-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d6137596-7229-4f0e-9258-a410f3a38483)'. Ident: 'index-667--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 4552)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.120-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (d6137596-7229-4f0e-9258-a410f3a38483).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.998-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a38e1881-81b7-45f9-89d7-1595b3dff2c1)'. Ident: 'index-657-8224331490264904478', commit timestamp: 'Timestamp(1574796759, 5505)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.105-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-661--8000595249233899911, commit timestamp: Timestamp(1574796759, 4552)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.120-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection a38e1881-81b7-45f9-89d7-1595b3dff2c1 from test5_fsmdb0.tmp.agg_out.c81fa1e5-aca6-4042-9bc5-71b13435513a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.998-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-649-8224331490264904478, commit timestamp: Timestamp(1574796759, 5505)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.105-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 with provided UUID: 7b5de924-72b7-42c0-8fd2-b8e841365693 and options: { uuid: UUID("7b5de924-72b7-42c0-8fd2-b8e841365693"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.120-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d6137596-7229-4f0e-9258-a410f3a38483)'. Ident: 'index-662--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 4552)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.998-0500 I INDEX [conn112] Registering index build: 317ebd83-3a16-42d6-9686-1700b837b2ca
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.107-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 4d5f21fe-b2db-4b48-a667-4614691814d2: test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 ( a6ce9d79-143a-4ced-9e13-c3f939be986a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.120-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d6137596-7229-4f0e-9258-a410f3a38483)'. Ident: 'index-667--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 4552)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:39.999-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1059592561896686490, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1160778163493068927, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796759801), clusterTime: Timestamp(1574796759, 2657) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796759, 2721), signature: { hash: BinData(0, 51FCA4F8D1E8B167B6381EA173F59B5D2F1752AE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 197ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.121-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.120-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-661--4104909142373009110, commit timestamp: Timestamp(1574796759, 4552)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.001-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 5922784e-0921-4294-a4a4-ef57ad1e7853: test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b ( 741a718d-5025-4bbf-8f15-0c288d313d26 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.144-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.122-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 9523c709-891d-4fdb-98b1-88eb5acd8d9e: test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 ( a6ce9d79-143a-4ced-9e13-c3f939be986a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.016-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.144-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.123-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 with provided UUID: 7b5de924-72b7-42c0-8fd2-b8e841365693 and options: { uuid: UUID("7b5de924-72b7-42c0-8fd2-b8e841365693"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.016-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.144-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 52101693-242b-4e1d-bc16-821b8837bf6f: test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b (741a718d-5025-4bbf-8f15-0c288d313d26 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.135-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.016-0500 I STORAGE [conn112] Index build initialized: 317ebd83-3a16-42d6-9686-1700b837b2ca: test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 (7b5de924-72b7-42c0-8fd2-b8e841365693 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.144-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.166-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.017-0500 I INDEX [conn112] Waiting for index build to complete: 317ebd83-3a16-42d6-9686-1700b837b2ca
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.144-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.166-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.017-0500 I INDEX [conn114] Index build completed: 5922784e-0921-4294-a4a4-ef57ad1e7853
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.145-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b (39acb95d-d2d3-4131-9240-2d099d5f97d3) to test5_fsmdb0.agg_out and drop a38e1881-81b7-45f9-89d7-1595b3dff2c1.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.166-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: f64a45da-b85a-495c-8b3f-3a788447acbe: test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b (741a718d-5025-4bbf-8f15-0c288d313d26 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.017-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.146-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.166-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.017-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (39acb95d-d2d3-4131-9240-2d099d5f97d3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796760, 2), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.146-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (a38e1881-81b7-45f9-89d7-1595b3dff2c1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 5505), t: 1 } and commit timestamp Timestamp(1574796759, 5505)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.167-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.017-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (39acb95d-d2d3-4131-9240-2d099d5f97d3).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.146-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (a38e1881-81b7-45f9-89d7-1595b3dff2c1).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.168-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b (39acb95d-d2d3-4131-9240-2d099d5f97d3) to test5_fsmdb0.agg_out and drop a38e1881-81b7-45f9-89d7-1595b3dff2c1.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.017-0500 I STORAGE [conn46] renameCollection: renaming collection 729b80eb-6f7d-413a-8110-a36cde3bc772 from test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.146-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 39acb95d-d2d3-4131-9240-2d099d5f97d3 from test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.170-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.017-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (39acb95d-d2d3-4131-9240-2d099d5f97d3)'. Ident: 'index-656-8224331490264904478', commit timestamp: 'Timestamp(1574796760, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.146-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a38e1881-81b7-45f9-89d7-1595b3dff2c1)'. Ident: 'index-660--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 5505)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.170-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (a38e1881-81b7-45f9-89d7-1595b3dff2c1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796759, 5505), t: 1 } and commit timestamp Timestamp(1574796759, 5505)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.017-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (39acb95d-d2d3-4131-9240-2d099d5f97d3)'. Ident: 'index-661-8224331490264904478', commit timestamp: 'Timestamp(1574796760, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.146-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a38e1881-81b7-45f9-89d7-1595b3dff2c1)'. Ident: 'index-673--8000595249233899911', commit timestamp: 'Timestamp(1574796759, 5505)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.170-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (a38e1881-81b7-45f9-89d7-1595b3dff2c1).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.017-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-654-8224331490264904478, commit timestamp: Timestamp(1574796760, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.146-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-659--8000595249233899911, commit timestamp: Timestamp(1574796759, 5505)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.170-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 39acb95d-d2d3-4131-9240-2d099d5f97d3 from test5_fsmdb0.tmp.agg_out.f86928aa-e969-435e-8c7d-46cc0a23c72b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.017-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.148-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 52101693-242b-4e1d-bc16-821b8837bf6f: test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b ( 741a718d-5025-4bbf-8f15-0c288d313d26 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.170-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a38e1881-81b7-45f9-89d7-1595b3dff2c1)'. Ident: 'index-660--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 5505)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.017-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 9140867837625618996, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3927285544937577414, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796759834), clusterTime: Timestamp(1574796759, 3160) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796759, 3224), signature: { hash: BinData(0, 51FCA4F8D1E8B167B6381EA173F59B5D2F1752AE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 182ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.151-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 (729b80eb-6f7d-413a-8110-a36cde3bc772) to test5_fsmdb0.agg_out and drop 39acb95d-d2d3-4131-9240-2d099d5f97d3.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.170-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a38e1881-81b7-45f9-89d7-1595b3dff2c1)'. Ident: 'index-673--4104909142373009110', commit timestamp: 'Timestamp(1574796759, 5505)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.017-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b with generated UUID: 575ffa6d-b5a3-4e26-8672-2dc8d3c41398 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.151-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (39acb95d-d2d3-4131-9240-2d099d5f97d3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796760, 2), t: 1 } and commit timestamp Timestamp(1574796760, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.170-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-659--4104909142373009110, commit timestamp: Timestamp(1574796759, 5505)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.018-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.151-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (39acb95d-d2d3-4131-9240-2d099d5f97d3).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.171-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: f64a45da-b85a-495c-8b3f-3a788447acbe: test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b ( 741a718d-5025-4bbf-8f15-0c288d313d26 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.020-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c with generated UUID: ea6396c1-14d0-4bd9-88f9-4579395c07c3 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.151-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 729b80eb-6f7d-413a-8110-a36cde3bc772 from test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.174-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 (729b80eb-6f7d-413a-8110-a36cde3bc772) to test5_fsmdb0.agg_out and drop 39acb95d-d2d3-4131-9240-2d099d5f97d3.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.029-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.151-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (39acb95d-d2d3-4131-9240-2d099d5f97d3)'. Ident: 'index-666--8000595249233899911', commit timestamp: 'Timestamp(1574796760, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.175-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (39acb95d-d2d3-4131-9240-2d099d5f97d3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796760, 2), t: 1 } and commit timestamp Timestamp(1574796760, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.046-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.151-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (39acb95d-d2d3-4131-9240-2d099d5f97d3)'. Ident: 'index-675--8000595249233899911', commit timestamp: 'Timestamp(1574796760, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.175-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (39acb95d-d2d3-4131-9240-2d099d5f97d3).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.047-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 317ebd83-3a16-42d6-9686-1700b837b2ca: test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 ( 7b5de924-72b7-42c0-8fd2-b8e841365693 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.151-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-665--8000595249233899911, commit timestamp: Timestamp(1574796760, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.175-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 729b80eb-6f7d-413a-8110-a36cde3bc772 from test5_fsmdb0.tmp.agg_out.e759fe2c-07b8-4615-a61e-82e54c3e8602 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.175-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (39acb95d-d2d3-4131-9240-2d099d5f97d3)'. Ident: 'index-666--4104909142373009110', commit timestamp: 'Timestamp(1574796760, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.157-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b with provided UUID: 575ffa6d-b5a3-4e26-8672-2dc8d3c41398 and options: { uuid: UUID("575ffa6d-b5a3-4e26-8672-2dc8d3c41398"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.047-0500 I INDEX [conn112] Index build completed: 317ebd83-3a16-42d6-9686-1700b837b2ca
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.175-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (39acb95d-d2d3-4131-9240-2d099d5f97d3)'. Ident: 'index-675--4104909142373009110', commit timestamp: 'Timestamp(1574796760, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.175-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.053-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.175-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-665--4104909142373009110, commit timestamp: Timestamp(1574796760, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.176-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c with provided UUID: ea6396c1-14d0-4bd9-88f9-4579395c07c3 and options: { uuid: UUID("ea6396c1-14d0-4bd9-88f9-4579395c07c3"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.053-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.177-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b with provided UUID: 575ffa6d-b5a3-4e26-8672-2dc8d3c41398 and options: { uuid: UUID("575ffa6d-b5a3-4e26-8672-2dc8d3c41398"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.191-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.053-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (729b80eb-6f7d-413a-8110-a36cde3bc772) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796760, 507), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.191-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.210-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.053-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (729b80eb-6f7d-413a-8110-a36cde3bc772).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.192-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c with provided UUID: ea6396c1-14d0-4bd9-88f9-4579395c07c3 and options: { uuid: UUID("ea6396c1-14d0-4bd9-88f9-4579395c07c3"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.210-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.053-0500 I STORAGE [conn110] renameCollection: renaming collection a6ce9d79-143a-4ced-9e13-c3f939be986a from test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.206-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.210-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 6933946c-21b0-42ce-a01e-f7feb9431053: test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 (7b5de924-72b7-42c0-8fd2-b8e841365693 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.053-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (729b80eb-6f7d-413a-8110-a36cde3bc772)'. Ident: 'index-660-8224331490264904478', commit timestamp: 'Timestamp(1574796760, 507)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.224-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.211-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.053-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (729b80eb-6f7d-413a-8110-a36cde3bc772)'. Ident: 'index-665-8224331490264904478', commit timestamp: 'Timestamp(1574796760, 507)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.224-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.211-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.053-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-658-8224331490264904478, commit timestamp: Timestamp(1574796760, 507)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.224-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 1bb99f05-f8a8-4724-befc-da280e55bd08: test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 (7b5de924-72b7-42c0-8fd2-b8e841365693 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.212-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 (a6ce9d79-143a-4ced-9e13-c3f939be986a) to test5_fsmdb0.agg_out and drop 729b80eb-6f7d-413a-8110-a36cde3bc772.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.053-0500 I INDEX [conn108] Registering index build: c09f6385-6a2c-4ee7-bcc1-debda9b6f9d3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.224-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.213-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.053-0500 I INDEX [conn46] Registering index build: c679b3ee-368f-40b3-886d-e8800b081e02
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.225-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.214-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (729b80eb-6f7d-413a-8110-a36cde3bc772) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796760, 507), t: 1 } and commit timestamp Timestamp(1574796760, 507)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.054-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1140330322676416024, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 963826095656677489, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796759871), clusterTime: Timestamp(1574796759, 3537) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796759, 3537), signature: { hash: BinData(0, 51FCA4F8D1E8B167B6381EA173F59B5D2F1752AE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 181ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.225-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 (a6ce9d79-143a-4ced-9e13-c3f939be986a) to test5_fsmdb0.agg_out and drop 729b80eb-6f7d-413a-8110-a36cde3bc772.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.214-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (729b80eb-6f7d-413a-8110-a36cde3bc772).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.056-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 with generated UUID: 3243bad4-833c-4c6f-8ad7-3014ed95dcf8 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.227-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.214-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection a6ce9d79-143a-4ced-9e13-c3f939be986a from test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.078-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.228-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (729b80eb-6f7d-413a-8110-a36cde3bc772) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796760, 507), t: 1 } and commit timestamp Timestamp(1574796760, 507)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.214-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (729b80eb-6f7d-413a-8110-a36cde3bc772)'. Ident: 'index-670--8000595249233899911', commit timestamp: 'Timestamp(1574796760, 507)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.078-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.228-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (729b80eb-6f7d-413a-8110-a36cde3bc772).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.214-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (729b80eb-6f7d-413a-8110-a36cde3bc772)'. Ident: 'index-679--8000595249233899911', commit timestamp: 'Timestamp(1574796760, 507)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.078-0500 I STORAGE [conn108] Index build initialized: c09f6385-6a2c-4ee7-bcc1-debda9b6f9d3: test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c (ea6396c1-14d0-4bd9-88f9-4579395c07c3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.228-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection a6ce9d79-143a-4ced-9e13-c3f939be986a from test5_fsmdb0.tmp.agg_out.0ac77b78-3ead-4477-aeb7-c5445cfe90e2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.214-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-669--8000595249233899911, commit timestamp: Timestamp(1574796760, 507)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.078-0500 I INDEX [conn108] Waiting for index build to complete: c09f6385-6a2c-4ee7-bcc1-debda9b6f9d3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.228-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (729b80eb-6f7d-413a-8110-a36cde3bc772)'. Ident: 'index-670--4104909142373009110', commit timestamp: 'Timestamp(1574796760, 507)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.215-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 6933946c-21b0-42ce-a01e-f7feb9431053: test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 ( 7b5de924-72b7-42c0-8fd2-b8e841365693 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.086-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.228-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (729b80eb-6f7d-413a-8110-a36cde3bc772)'. Ident: 'index-679--4104909142373009110', commit timestamp: 'Timestamp(1574796760, 507)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.216-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 with provided UUID: 3243bad4-833c-4c6f-8ad7-3014ed95dcf8 and options: { uuid: UUID("3243bad4-833c-4c6f-8ad7-3014ed95dcf8"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.086-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.228-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-669--4104909142373009110, commit timestamp: Timestamp(1574796760, 507)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.232-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.086-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (a6ce9d79-143a-4ced-9e13-c3f939be986a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796760, 1010), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.230-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796760, 571) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796760, 636), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 16220 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 169ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.236-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b (741a718d-5025-4bbf-8f15-0c288d313d26) to test5_fsmdb0.agg_out and drop a6ce9d79-143a-4ced-9e13-c3f939be986a.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.086-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (a6ce9d79-143a-4ced-9e13-c3f939be986a).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.230-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 1bb99f05-f8a8-4724-befc-da280e55bd08: test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 ( 7b5de924-72b7-42c0-8fd2-b8e841365693 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.236-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (a6ce9d79-143a-4ced-9e13-c3f939be986a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796760, 1010), t: 1 } and commit timestamp Timestamp(1574796760, 1010)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.086-0500 I STORAGE [conn114] renameCollection: renaming collection 741a718d-5025-4bbf-8f15-0c288d313d26 from test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.233-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 with provided UUID: 3243bad4-833c-4c6f-8ad7-3014ed95dcf8 and options: { uuid: UUID("3243bad4-833c-4c6f-8ad7-3014ed95dcf8"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.236-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (a6ce9d79-143a-4ced-9e13-c3f939be986a).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.087-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a6ce9d79-143a-4ced-9e13-c3f939be986a)'. Ident: 'index-664-8224331490264904478', commit timestamp: 'Timestamp(1574796760, 1010)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.274-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.237-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 741a718d-5025-4bbf-8f15-0c288d313d26 from test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.087-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a6ce9d79-143a-4ced-9e13-c3f939be986a)'. Ident: 'index-667-8224331490264904478', commit timestamp: 'Timestamp(1574796760, 1010)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.278-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b (741a718d-5025-4bbf-8f15-0c288d313d26) to test5_fsmdb0.agg_out and drop a6ce9d79-143a-4ced-9e13-c3f939be986a.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.237-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a6ce9d79-143a-4ced-9e13-c3f939be986a)'. Ident: 'index-672--8000595249233899911', commit timestamp: 'Timestamp(1574796760, 1010)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.087-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-662-8224331490264904478, commit timestamp: Timestamp(1574796760, 1010)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.279-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (a6ce9d79-143a-4ced-9e13-c3f939be986a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796760, 1010), t: 1 } and commit timestamp Timestamp(1574796760, 1010)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.237-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a6ce9d79-143a-4ced-9e13-c3f939be986a)'. Ident: 'index-681--8000595249233899911', commit timestamp: 'Timestamp(1574796760, 1010)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.087-0500 I INDEX [conn110] Registering index build: c0e0802a-8bfb-4e20-968b-faad9ae94f6e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.279-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (a6ce9d79-143a-4ced-9e13-c3f939be986a).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:40.237-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-671--8000595249233899911, commit timestamp: Timestamp(1574796760, 1010)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.087-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.279-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 741a718d-5025-4bbf-8f15-0c288d313d26 from test5_fsmdb0.tmp.agg_out.c55d8f5b-b399-4829-aa38-15e74ad4d71b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.087-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5440882261896762943, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 776649223228459724, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796759924), clusterTime: Timestamp(1574796759, 4043) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796759, 4043), signature: { hash: BinData(0, 51FCA4F8D1E8B167B6381EA173F59B5D2F1752AE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 162ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.279-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a6ce9d79-143a-4ced-9e13-c3f939be986a)'. Ident: 'index-672--4104909142373009110', commit timestamp: 'Timestamp(1574796760, 1010)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.279-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a6ce9d79-143a-4ced-9e13-c3f939be986a)'. Ident: 'index-681--4104909142373009110', commit timestamp: 'Timestamp(1574796760, 1010)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:40.279-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-671--4104909142373009110, commit timestamp: Timestamp(1574796760, 1010)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.087-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:40.107-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:42.755-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:42.755-0500 I STORAGE [conn46] Index build initialized: c679b3ee-368f-40b3-886d-e8800b081e02: test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b (575ffa6d-b5a3-4e26-8672-2dc8d3c41398 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:42.755-0500 I INDEX [conn46] Waiting for index build to complete: c679b3ee-368f-40b3-886d-e8800b081e02
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.324-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.324-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.325-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (741a718d-5025-4bbf-8f15-0c288d313d26) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 3), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.325-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (741a718d-5025-4bbf-8f15-0c288d313d26).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.325-0500 I STORAGE [conn112] renameCollection: renaming collection 7b5de924-72b7-42c0-8fd2-b8e841365693 from test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.325-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (741a718d-5025-4bbf-8f15-0c288d313d26)'. Ident: 'index-670-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 3)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.325-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (741a718d-5025-4bbf-8f15-0c288d313d26)'. Ident: 'index-671-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 3)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.325-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-668-8224331490264904478, commit timestamp: Timestamp(1574796765, 3)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.325-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 appName: "tid:3" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "moderate", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796760, 1446), signature: { hash: BinData(0, 75202972CB9C321D6CD6AD3744328C13AD4E3003), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 5222433 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 5222ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.325-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.325-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796760, 571), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796760, 636), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796760, 571). Collection minimum timestamp is Timestamp(1574796760, 572)" errName:SnapshotUnavailable errCode:246 reslen:580 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 5093757 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 5093ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.325-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 48388597039964775, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8798753119657760302, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796759967), clusterTime: Timestamp(1574796759, 4552) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796759, 4552), signature: { hash: BinData(0, 51FCA4F8D1E8B167B6381EA173F59B5D2F1752AE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 5357ms
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:45.325-0500 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1574796765158) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } planSummary: IDHACK keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keysInserted:1 keysDeleted:1 numYields:0 reslen:367 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 167ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:45.326-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796759, 4552), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 5359ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.327-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c09f6385-6a2c-4ee7-bcc1-debda9b6f9d3: test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c ( ea6396c1-14d0-4bd9-88f9-4579395c07c3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.327-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.335-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.342-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.342-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.342-0500 I STORAGE [conn110] Index build initialized: c0e0802a-8bfb-4e20-968b-faad9ae94f6e: test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 (3243bad4-833c-4c6f-8ad7-3014ed95dcf8 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.342-0500 I INDEX [conn110] Waiting for index build to complete: c0e0802a-8bfb-4e20-968b-faad9ae94f6e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.342-0500 I INDEX [conn108] Index build completed: c09f6385-6a2c-4ee7-bcc1-debda9b6f9d3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.342-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796760, 507), signature: { hash: BinData(0, 75202972CB9C321D6CD6AD3744328C13AD4E3003), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 5288ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.342-0500 I COMMAND [conn70] CMD: dropIndexes test5_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.342-0500 I COMMAND [conn67] command test5_fsmdb0.agg_out appName: "tid:0" command: collMod { collMod: "agg_out", validationLevel: "off", writeConcern: { w: 1, wtimeout: 0 }, allowImplicitCollectionCreation: false, shardVersion: [ Timestamp(0, 0), ObjectId('000000000000000000000000') ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796760, 1074), signature: { hash: BinData(0, 75202972CB9C321D6CD6AD3744328C13AD4E3003), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2585235 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2585ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.342-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.342-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: c679b3ee-368f-40b3-886d-e8800b081e02: test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b ( 575ffa6d-b5a3-4e26-8672-2dc8d3c41398 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.342-0500 I INDEX [conn46] Index build completed: c679b3ee-368f-40b3-886d-e8800b081e02
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.342-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796760, 506), signature: { hash: BinData(0, 75202972CB9C321D6CD6AD3744328C13AD4E3003), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 15468 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 5296ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:45.342-0500 I COMMAND [conn200] command test5_fsmdb0.agg_out appName: "tid:0" command: collMod { collMod: "agg_out", validationLevel: "off", lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796760, 1074), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } numYields:0 reslen:249 protocol:op_msg 2586ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.343-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.343-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.343-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.343-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 39836341-3d2d-43d7-8992-0209802b01b3: test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c (ea6396c1-14d0-4bd9-88f9-4579395c07c3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.343-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.344-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.345-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 with generated UUID: 33b208a0-af24-4d01-9f08-28c0a2cedcfc and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.345-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 (7b5de924-72b7-42c0-8fd2-b8e841365693) to test5_fsmdb0.agg_out and drop 741a718d-5025-4bbf-8f15-0c288d313d26.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.345-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d with generated UUID: 75a87e57-1ef3-4da7-9776-7a259202f74b and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.345-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.346-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.347-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (741a718d-5025-4bbf-8f15-0c288d313d26) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 3), t: 1 } and commit timestamp Timestamp(1574796765, 3)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.347-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (741a718d-5025-4bbf-8f15-0c288d313d26).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.347-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 7b5de924-72b7-42c0-8fd2-b8e841365693 from test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.347-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (741a718d-5025-4bbf-8f15-0c288d313d26)'. Ident: 'index-678--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 3)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.347-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (741a718d-5025-4bbf-8f15-0c288d313d26)'. Ident: 'index-685--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 3)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.347-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-677--8000595249233899911, commit timestamp: Timestamp(1574796765, 3)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.349-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 39836341-3d2d-43d7-8992-0209802b01b3: test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c ( ea6396c1-14d0-4bd9-88f9-4579395c07c3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.360-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.360-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.360-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: e3e147ac-2d7d-4e1d-85b1-34f37e1f269b: test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c (ea6396c1-14d0-4bd9-88f9-4579395c07c3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.360-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.361-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.362-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 (7b5de924-72b7-42c0-8fd2-b8e841365693) to test5_fsmdb0.agg_out and drop 741a718d-5025-4bbf-8f15-0c288d313d26.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.362-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c0e0802a-8bfb-4e20-968b-faad9ae94f6e: test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 ( 3243bad4-833c-4c6f-8ad7-3014ed95dcf8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.362-0500 I INDEX [conn110] Index build completed: c0e0802a-8bfb-4e20-968b-faad9ae94f6e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.362-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796760, 1010), signature: { hash: BinData(0, 75202972CB9C321D6CD6AD3744328C13AD4E3003), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 2569859 } }, Collection: { acquireCount: { w: 1, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 89 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 5275ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.364-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.364-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.364-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.364-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 5ee44a13-bd54-429d-83d3-dabe8ba48b60: test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b (575ffa6d-b5a3-4e26-8672-2dc8d3c41398 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.365-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.364-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (741a718d-5025-4bbf-8f15-0c288d313d26) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 3), t: 1 } and commit timestamp Timestamp(1574796765, 3)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.364-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (741a718d-5025-4bbf-8f15-0c288d313d26).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.365-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 7b5de924-72b7-42c0-8fd2-b8e841365693 from test5_fsmdb0.tmp.agg_out.37489012-226e-4ecd-b1ce-e60a3a0dfcf7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.365-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (741a718d-5025-4bbf-8f15-0c288d313d26)'. Ident: 'index-678--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 3)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.365-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (741a718d-5025-4bbf-8f15-0c288d313d26)'. Ident: 'index-685--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 3)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.365-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-677--4104909142373009110, commit timestamp: Timestamp(1574796765, 3)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.365-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.366-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e3e147ac-2d7d-4e1d-85b1-34f37e1f269b: test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c ( ea6396c1-14d0-4bd9-88f9-4579395c07c3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.367-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.370-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 5ee44a13-bd54-429d-83d3-dabe8ba48b60: test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b ( 575ffa6d-b5a3-4e26-8672-2dc8d3c41398 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.370-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.371-0500 I INDEX [conn112] Registering index build: 6125ceb3-94f0-453f-8ae6-2ae7b058ae97
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.374-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 with provided UUID: 33b208a0-af24-4d01-9f08-28c0a2cedcfc and options: { uuid: UUID("33b208a0-af24-4d01-9f08-28c0a2cedcfc"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.377-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.379-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.379-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.379-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 2eebfcf8-0f87-47df-8d03-4d17f6520828: test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b (575ffa6d-b5a3-4e26-8672-2dc8d3c41398 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.379-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.380-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.383-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.387-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 2eebfcf8-0f87-47df-8d03-4d17f6520828: test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b ( 575ffa6d-b5a3-4e26-8672-2dc8d3c41398 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.387-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.388-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d with provided UUID: 75a87e57-1ef3-4da7-9776-7a259202f74b and options: { uuid: UUID("75a87e57-1ef3-4da7-9776-7a259202f74b"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.389-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 with provided UUID: 33b208a0-af24-4d01-9f08-28c0a2cedcfc and options: { uuid: UUID("33b208a0-af24-4d01-9f08-28c0a2cedcfc"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.392-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.392-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.392-0500 I STORAGE [conn112] Index build initialized: 6125ceb3-94f0-453f-8ae6-2ae7b058ae97: test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 (33b208a0-af24-4d01-9f08-28c0a2cedcfc ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.392-0500 I INDEX [conn112] Waiting for index build to complete: 6125ceb3-94f0-453f-8ae6-2ae7b058ae97
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.393-0500 I INDEX [conn114] Registering index build: eb708d97-2ceb-4777-a6ab-0ef414bb1170
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.393-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.393-0500 I COMMAND [conn46] CMD: drop test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.393-0500 I COMMAND [conn108] CMD: drop test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.393-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.402-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.404-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.404-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.405-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d with provided UUID: 75a87e57-1ef3-4da7-9776-7a259202f74b and options: { uuid: UUID("75a87e57-1ef3-4da7-9776-7a259202f74b"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.411-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.411-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.411-0500 I STORAGE [conn114] Index build initialized: eb708d97-2ceb-4777-a6ab-0ef414bb1170: test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d (75a87e57-1ef3-4da7-9776-7a259202f74b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.411-0500 I INDEX [conn114] Waiting for index build to complete: eb708d97-2ceb-4777-a6ab-0ef414bb1170
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.411-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b (575ffa6d-b5a3-4e26-8672-2dc8d3c41398) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.411-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c (ea6396c1-14d0-4bd9-88f9-4579395c07c3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.411-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b (575ffa6d-b5a3-4e26-8672-2dc8d3c41398).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.411-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c (ea6396c1-14d0-4bd9-88f9-4579395c07c3).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.411-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b (575ffa6d-b5a3-4e26-8672-2dc8d3c41398)'. Ident: 'index-679-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 1516)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.411-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c (ea6396c1-14d0-4bd9-88f9-4579395c07c3)'. Ident: 'index-680-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 1517)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.411-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b (575ffa6d-b5a3-4e26-8672-2dc8d3c41398)'. Ident: 'index-685-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 1516)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.411-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c (ea6396c1-14d0-4bd9-88f9-4579395c07c3)'. Ident: 'index-681-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 1517)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.411-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b'. Ident: collection-677-8224331490264904478, commit timestamp: Timestamp(1574796765, 1516)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.411-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c'. Ident: collection-678-8224331490264904478, commit timestamp: Timestamp(1574796765, 1517)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.411-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.412-0500 I COMMAND [conn65] command test5_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 27991037063303611, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4919906201207489732, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796760018), clusterTime: Timestamp(1574796760, 66) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796760, 67), signature: { hash: BinData(0, 75202972CB9C321D6CD6AD3744328C13AD4E3003), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:993 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 5392ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.412-0500 I COMMAND [conn71] command test5_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3944822928295224106, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 20578596626576080, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796760000), clusterTime: Timestamp(1574796759, 5621) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796759, 5621), signature: { hash: BinData(0, 51FCA4F8D1E8B167B6381EA173F59B5D2F1752AE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:993 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 5410ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.412-0500 I COMMAND [conn110] CMD: drop test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.412-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 (3243bad4-833c-4c6f-8ad7-3014ed95dcf8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.412-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 (3243bad4-833c-4c6f-8ad7-3014ed95dcf8).
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:45.412-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796759, 5621), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:823 protocol:op_msg 5411ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:45.412-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796760, 66), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:823 protocol:op_msg 5393ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.412-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 (3243bad4-833c-4c6f-8ad7-3014ed95dcf8)'. Ident: 'index-684-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 1518)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.412-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 (3243bad4-833c-4c6f-8ad7-3014ed95dcf8)'. Ident: 'index-687-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 1518)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.412-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5'. Ident: collection-682-8224331490264904478, commit timestamp: Timestamp(1574796765, 1518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.412-0500 I COMMAND [conn68] command test5_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4618218038797724375, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4370883790915327054, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796760055), clusterTime: Timestamp(1574796760, 571) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796760, 571), signature: { hash: BinData(0, 75202972CB9C321D6CD6AD3744328C13AD4E3003), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:993 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 5356ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:45.413-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796760, 571), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:823 protocol:op_msg 5357ms
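The errCode:125 (CommandFailed) above is the $out commit path refusing to rename the temp collection into place because a concurrent operation changed the target's collection options mid-pipeline (validationLevel flipped from "moderate" to "off"). A minimal Python sketch of that options-match check follows; it is an illustration of the failure mode, not MongoDB source, and the function and exception names are hypothetical:

```python
# Hypothetical sketch of the check behind internalRenameIfOptionsAndIndexesMatch:
# the rename of the tmp.agg_out collection onto the $out target only commits if
# the target's options are byte-for-byte what they were when the pipeline began.

class CommandFailed(Exception):
    """Stand-in for MongoDB error code 125 (CommandFailed)."""
    code = 125

def rename_if_options_match(expected_options, current_options):
    """Commit the $out rename only if the target collection's options are unchanged."""
    if expected_options != current_options:
        raise CommandFailed(
            "collection options of target collection changed during processing. "
            f"Original options: {expected_options}, new options: {current_options}"
        )
    return "renamed"

# The race captured in the log: a concurrent collMod turned validation off
# between pipeline start and the final rename.
original = {"validationLevel": "moderate", "validationAction": "error"}
changed = {"validationLevel": "off", "validationAction": "error"}

try:
    rename_if_options_match(original, changed)
except CommandFailed as e:
    print(f"errName:CommandFailed errCode:{e.code} :: {e}")
```

In the workload, one FSM thread's collMod races with another thread's $out, so the check fires intermittently; the unchanged-options case commits the rename and succeeds.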
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.420-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.421-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:45.481-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796765, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 137ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.413-0500 I COMMAND [conn71] CMD: dropIndexes test5_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:45.535-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796765, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 191ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.420-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.413-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 6125ceb3-94f0-453f-8ae6-2ae7b058ae97: test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 ( 33b208a0-af24-4d01-9f08-28c0a2cedcfc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:45.667-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796765, 2153), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 185ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.436-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:45.585-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796765, 1517), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 172ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.413-0500 I INDEX [conn112] Index build completed: 6125ceb3-94f0-453f-8ae6-2ae7b058ae97
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.420-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 3939b5d4-604e-40ce-b358-8b7022d599d8: test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 (3243bad4-833c-4c6f-8ad7-3014ed95dcf8 ): indexes: 1
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:45.667-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796765, 1518), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 253ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.436-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:45.631-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796765, 1518), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 217ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.414-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.420-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.436-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 6706f1f4-f7f9-445d-a9aa-14c0adef438e: test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 (3243bad4-833c-4c6f-8ad7-3014ed95dcf8 ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:45.722-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796765, 2530), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 185ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.414-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c with generated UUID: 3e793ce7-1bcc-40a3-9c62-a981bcce6b79 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.421-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.436-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:45.771-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796765, 3042), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 166ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.415-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 with generated UUID: 397b7151-be41-40c6-b546-95044e2b69b6 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.423-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.437-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.416-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa with generated UUID: 26bd50e6-8229-4ee0-bdc9-ba6cd3107488 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:46.069-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796765, 3802), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 437ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.426-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 3939b5d4-604e-40ce-b358-8b7022d599d8: test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 ( 3243bad4-833c-4c6f-8ad7-3014ed95dcf8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.439-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.416-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.448-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.442-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 6706f1f4-f7f9-445d-a9aa-14c0adef438e: test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 ( 3243bad4-833c-4c6f-8ad7-3014ed95dcf8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.439-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: eb708d97-2ceb-4777-a6ab-0ef414bb1170: test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d ( 75a87e57-1ef3-4da7-9776-7a259202f74b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.448-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.465-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.439-0500 I INDEX [conn114] Index build completed: eb708d97-2ceb-4777-a6ab-0ef414bb1170
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.448-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 1b0e1d94-bfb7-485f-9a2b-df1fe6cefc48: test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 (33b208a0-af24-4d01-9f08-28c0a2cedcfc ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.465-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.448-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.448-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.465-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 6be696cf-72b0-46e6-bca0-e2b8fe4757c7: test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 (33b208a0-af24-4d01-9f08-28c0a2cedcfc ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.448-0500 I INDEX [conn110] Registering index build: 8a33b2f4-1e5e-47f5-bcc3-135b19cfddc3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.448-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.465-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.456-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.450-0500 I COMMAND [ReplWriterWorker-10] CMD: drop test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.466-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.464-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.450-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b (575ffa6d-b5a3-4e26-8672-2dc8d3c41398) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 1516), t: 1 } and commit timestamp Timestamp(1574796765, 1516)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.450-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b (575ffa6d-b5a3-4e26-8672-2dc8d3c41398).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.479-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.467-0500 I COMMAND [ReplWriterWorker-12] CMD: drop test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.450-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b (575ffa6d-b5a3-4e26-8672-2dc8d3c41398)'. Ident: 'index-688--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 1516)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.479-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.467-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b (575ffa6d-b5a3-4e26-8672-2dc8d3c41398) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 1516), t: 1 } and commit timestamp Timestamp(1574796765, 1516)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.450-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b (575ffa6d-b5a3-4e26-8672-2dc8d3c41398)'. Ident: 'index-697--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 1516)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.479-0500 I STORAGE [conn110] Index build initialized: 8a33b2f4-1e5e-47f5-bcc3-135b19cfddc3: test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c (3e793ce7-1bcc-40a3-9c62-a981bcce6b79 ): indexes: 1
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:46.073-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796765, 4550), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 404ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.467-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b (575ffa6d-b5a3-4e26-8672-2dc8d3c41398).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.450-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b'. Ident: collection-687--8000595249233899911, commit timestamp: Timestamp(1574796765, 1516)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.480-0500 I INDEX [conn110] Waiting for index build to complete: 8a33b2f4-1e5e-47f5-bcc3-135b19cfddc3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.468-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b (575ffa6d-b5a3-4e26-8672-2dc8d3c41398)'. Ident: 'index-688--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 1516)'
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:46.073-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796765, 4550), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 405ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.451-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.480-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.468-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b (575ffa6d-b5a3-4e26-8672-2dc8d3c41398)'. Ident: 'index-697--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 1516)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.451-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c (ea6396c1-14d0-4bd9-88f9-4579395c07c3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 1517), t: 1 } and commit timestamp Timestamp(1574796765, 1517)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.480-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (7b5de924-72b7-42c0-8fd2-b8e841365693) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 2089), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.468-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.5d69191b-acfa-4511-8021-3b9a6dcd9c2b'. Ident: collection-687--4104909142373009110, commit timestamp: Timestamp(1574796765, 1516)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.451-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c (ea6396c1-14d0-4bd9-88f9-4579395c07c3).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.480-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (7b5de924-72b7-42c0-8fd2-b8e841365693).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.468-0500 I COMMAND [ReplWriterWorker-3] CMD: drop test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.451-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.480-0500 I STORAGE [conn112] renameCollection: renaming collection 33b208a0-af24-4d01-9f08-28c0a2cedcfc from test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.468-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c (ea6396c1-14d0-4bd9-88f9-4579395c07c3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 1517), t: 1 } and commit timestamp Timestamp(1574796765, 1517)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.451-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c (ea6396c1-14d0-4bd9-88f9-4579395c07c3)'. Ident: 'index-690--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 1517)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.480-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7b5de924-72b7-42c0-8fd2-b8e841365693)'. Ident: 'index-674-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 2089)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.468-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c (ea6396c1-14d0-4bd9-88f9-4579395c07c3).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.451-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c (ea6396c1-14d0-4bd9-88f9-4579395c07c3)'. Ident: 'index-695--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 1517)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.480-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7b5de924-72b7-42c0-8fd2-b8e841365693)'. Ident: 'index-675-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 2089)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.469-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c (ea6396c1-14d0-4bd9-88f9-4579395c07c3)'. Ident: 'index-690--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 1517)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.451-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c'. Ident: collection-689--8000595249233899911, commit timestamp: Timestamp(1574796765, 1517)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.480-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-672-8224331490264904478, commit timestamp: Timestamp(1574796765, 2089)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.469-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c (ea6396c1-14d0-4bd9-88f9-4579395c07c3)'. Ident: 'index-695--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 1517)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.451-0500 I COMMAND [ReplWriterWorker-8] CMD: drop test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.480-0500 I INDEX [conn46] Registering index build: 5dda4601-7f5c-44af-97b0-79ee0b118ef1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.469-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.23bac218-c35d-41ad-9326-2f06c0dd211c'. Ident: collection-689--4104909142373009110, commit timestamp: Timestamp(1574796765, 1517)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.452-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 (3243bad4-833c-4c6f-8ad7-3014ed95dcf8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 1518), t: 1 } and commit timestamp Timestamp(1574796765, 1518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.480-0500 I INDEX [conn108] Registering index build: 9544795f-7b22-4cf7-965b-e45e9a63b358
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.469-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.452-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 (3243bad4-833c-4c6f-8ad7-3014ed95dcf8).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.480-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.469-0500 I COMMAND [ReplWriterWorker-0] CMD: drop test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.452-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 (3243bad4-833c-4c6f-8ad7-3014ed95dcf8)'. Ident: 'index-694--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 1518)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.480-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2139670886314638746, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2061171803249121441, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796765343), clusterTime: Timestamp(1574796765, 7) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 7), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 136ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.469-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 (3243bad4-833c-4c6f-8ad7-3014ed95dcf8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 1518), t: 1 } and commit timestamp Timestamp(1574796765, 1518)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.452-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 (3243bad4-833c-4c6f-8ad7-3014ed95dcf8)'. Ident: 'index-703--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 1518)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.481-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.469-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 (3243bad4-833c-4c6f-8ad7-3014ed95dcf8).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.452-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5'. Ident: collection-693--8000595249233899911, commit timestamp: Timestamp(1574796765, 1518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.483-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 with generated UUID: 82d69723-7c63-43bb-bbf6-f924212a860c and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.469-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 (3243bad4-833c-4c6f-8ad7-3014ed95dcf8)'. Ident: 'index-694--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 1518)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.452-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c with provided UUID: 3e793ce7-1bcc-40a3-9c62-a981bcce6b79 and options: { uuid: UUID("3e793ce7-1bcc-40a3-9c62-a981bcce6b79"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.511-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.469-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5 (3243bad4-833c-4c6f-8ad7-3014ed95dcf8)'. Ident: 'index-703--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 1518)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.454-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 1b0e1d94-bfb7-485f-9a2b-df1fe6cefc48: test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 ( 33b208a0-af24-4d01-9f08-28c0a2cedcfc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.526-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.469-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.1c9336f4-6806-42b0-bd23-6bd73c70e0c5'. Ident: collection-693--4104909142373009110, commit timestamp: Timestamp(1574796765, 1518)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.467-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.526-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.470-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c with provided UUID: 3e793ce7-1bcc-40a3-9c62-a981bcce6b79 and options: { uuid: UUID("3e793ce7-1bcc-40a3-9c62-a981bcce6b79"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.468-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 with provided UUID: 397b7151-be41-40c6-b546-95044e2b69b6 and options: { uuid: UUID("397b7151-be41-40c6-b546-95044e2b69b6"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.526-0500 I STORAGE [conn46] Index build initialized: 5dda4601-7f5c-44af-97b0-79ee0b118ef1: test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 (397b7151-be41-40c6-b546-95044e2b69b6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.471-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 6be696cf-72b0-46e6-bca0-e2b8fe4757c7: test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 ( 33b208a0-af24-4d01-9f08-28c0a2cedcfc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.481-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.526-0500 I INDEX [conn46] Waiting for index build to complete: 5dda4601-7f5c-44af-97b0-79ee0b118ef1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.486-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.482-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa with provided UUID: 26bd50e6-8229-4ee0-bdc9-ba6cd3107488 and options: { uuid: UUID("26bd50e6-8229-4ee0-bdc9-ba6cd3107488"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.528-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 8a33b2f4-1e5e-47f5-bcc3-135b19cfddc3: test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c ( 3e793ce7-1bcc-40a3-9c62-a981bcce6b79 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.487-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 with provided UUID: 397b7151-be41-40c6-b546-95044e2b69b6 and options: { uuid: UUID("397b7151-be41-40c6-b546-95044e2b69b6"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.496-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.528-0500 I INDEX [conn110] Index build completed: 8a33b2f4-1e5e-47f5-bcc3-135b19cfddc3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.501-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.510-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.534-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.502-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa with provided UUID: 26bd50e6-8229-4ee0-bdc9-ba6cd3107488 and options: { uuid: UUID("26bd50e6-8229-4ee0-bdc9-ba6cd3107488"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.510-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.534-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.515-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.510-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 9df59ead-cd81-4c30-ab0c-498aafbaa4e5: test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d (75a87e57-1ef3-4da7-9776-7a259202f74b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.534-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (33b208a0-af24-4d01-9f08-28c0a2cedcfc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 2530), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.529-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.510-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.534-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (33b208a0-af24-4d01-9f08-28c0a2cedcfc).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.529-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.511-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.534-0500 I STORAGE [conn114] renameCollection: renaming collection 75a87e57-1ef3-4da7-9776-7a259202f74b from test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.530-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 5d3dcc28-5e4f-4b3f-b31d-4a1985ebbff7: test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d (75a87e57-1ef3-4da7-9776-7a259202f74b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.515-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 64 side writes (inserted: 64, deleted: 0) for '_id_hashed' in 2 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.534-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (33b208a0-af24-4d01-9f08-28c0a2cedcfc)'. Ident: 'index-691-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 2530)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.530-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.515-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.534-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (33b208a0-af24-4d01-9f08-28c0a2cedcfc)'. Ident: 'index-693-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 2530)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.530-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.516-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 (33b208a0-af24-4d01-9f08-28c0a2cedcfc) to test5_fsmdb0.agg_out and drop 7b5de924-72b7-42c0-8fd2-b8e841365693.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.534-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-689-8224331490264904478, commit timestamp: Timestamp(1574796765, 2530)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.532-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:46.117-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796765, 5058), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 393ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:46.271-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796766, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 196ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.516-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (7b5de924-72b7-42c0-8fd2-b8e841365693) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 2089), t: 1 } and commit timestamp Timestamp(1574796765, 2089)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.535-0500 I INDEX [conn112] Registering index build: 156005a6-8fbc-418f-b54c-23fa46465a90
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.534-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 5d3dcc28-5e4f-4b3f-b31d-4a1985ebbff7: test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d ( 75a87e57-1ef3-4da7-9776-7a259202f74b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:46.164-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796765, 5564), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 391ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:46.290-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796766, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 215ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.516-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (7b5de924-72b7-42c0-8fd2-b8e841365693).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.535-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.544-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 (33b208a0-af24-4d01-9f08-28c0a2cedcfc) to test5_fsmdb0.agg_out and drop 7b5de924-72b7-42c0-8fd2-b8e841365693.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:46.249-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796765, 6069), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 178ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.516-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 33b208a0-af24-4d01-9f08-28c0a2cedcfc from test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.535-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3847720676095899551, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2441770692191936809, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796765343), clusterTime: Timestamp(1574796765, 7) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 7), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 190ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.544-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (7b5de924-72b7-42c0-8fd2-b8e841365693) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 2089), t: 1 } and commit timestamp Timestamp(1574796765, 2089)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:46.290-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796766, 1015), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 125ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.516-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7b5de924-72b7-42c0-8fd2-b8e841365693)'. Ident: 'index-684--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 2089)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.535-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.544-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (7b5de924-72b7-42c0-8fd2-b8e841365693).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.516-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7b5de924-72b7-42c0-8fd2-b8e841365693)'. Ident: 'index-691--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 2089)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.537-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.544-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 33b208a0-af24-4d01-9f08-28c0a2cedcfc from test5_fsmdb0.tmp.agg_out.f40f1cb3-1222-4de1-a3cf-1e5d360185e9 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.516-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-683--8000595249233899911, commit timestamp: Timestamp(1574796765, 2089)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.538-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 with generated UUID: 54c42a64-af99-4dd3-bc45-0edfb751a5dc and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.545-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7b5de924-72b7-42c0-8fd2-b8e841365693)'. Ident: 'index-684--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 2089)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.517-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 9df59ead-cd81-4c30-ab0c-498aafbaa4e5: test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d ( 75a87e57-1ef3-4da7-9776-7a259202f74b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.545-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 5dda4601-7f5c-44af-97b0-79ee0b118ef1: test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 ( 397b7151-be41-40c6-b546-95044e2b69b6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.545-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7b5de924-72b7-42c0-8fd2-b8e841365693)'. Ident: 'index-691--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 2089)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.537-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 with provided UUID: 82d69723-7c63-43bb-bbf6-f924212a860c and options: { uuid: UUID("82d69723-7c63-43bb-bbf6-f924212a860c"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.562-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.545-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-683--4104909142373009110, commit timestamp: Timestamp(1574796765, 2089)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.550-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.562-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.551-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 with provided UUID: 82d69723-7c63-43bb-bbf6-f924212a860c and options: { uuid: UUID("82d69723-7c63-43bb-bbf6-f924212a860c"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.568-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.562-0500 I STORAGE [conn108] Index build initialized: 9544795f-7b22-4cf7-965b-e45e9a63b358: test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa (26bd50e6-8229-4ee0-bdc9-ba6cd3107488 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.564-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.568-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.562-0500 I INDEX [conn46] Index build completed: 5dda4601-7f5c-44af-97b0-79ee0b118ef1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.584-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.568-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 27f18d49-7681-4abf-9d75-d5b00f49a822: test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c (3e793ce7-1bcc-40a3-9c62-a981bcce6b79 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.562-0500 I INDEX [conn108] Waiting for index build to complete: 9544795f-7b22-4cf7-965b-e45e9a63b358
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.585-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.569-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.562-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 2087), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 23115 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 105ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.585-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: ef66b8fc-f2fc-4c03-a6cc-55944dc13561: test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c (3e793ce7-1bcc-40a3-9c62-a981bcce6b79 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.569-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.570-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.585-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.571-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d (75a87e57-1ef3-4da7-9776-7a259202f74b) to test5_fsmdb0.agg_out and drop 33b208a0-af24-4d01-9f08-28c0a2cedcfc.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.584-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.586-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.571-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.584-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.587-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d (75a87e57-1ef3-4da7-9776-7a259202f74b) to test5_fsmdb0.agg_out and drop 33b208a0-af24-4d01-9f08-28c0a2cedcfc.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.572-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (33b208a0-af24-4d01-9f08-28c0a2cedcfc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 2530), t: 1 } and commit timestamp Timestamp(1574796765, 2530)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.584-0500 I STORAGE [conn112] Index build initialized: 156005a6-8fbc-418f-b54c-23fa46465a90: test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 (82d69723-7c63-43bb-bbf6-f924212a860c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.589-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.572-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (33b208a0-af24-4d01-9f08-28c0a2cedcfc).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.584-0500 I INDEX [conn112] Waiting for index build to complete: 156005a6-8fbc-418f-b54c-23fa46465a90
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.589-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (33b208a0-af24-4d01-9f08-28c0a2cedcfc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 2530), t: 1 } and commit timestamp Timestamp(1574796765, 2530)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.572-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 75a87e57-1ef3-4da7-9776-7a259202f74b from test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.585-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.589-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (33b208a0-af24-4d01-9f08-28c0a2cedcfc).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.572-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (33b208a0-af24-4d01-9f08-28c0a2cedcfc)'. Ident: 'index-700--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 2530)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.585-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (75a87e57-1ef3-4da7-9776-7a259202f74b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 3036), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.589-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 75a87e57-1ef3-4da7-9776-7a259202f74b from test5_fsmdb0.tmp.agg_out.75651634-95fb-4384-b674-a83e576b6f1d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.572-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (33b208a0-af24-4d01-9f08-28c0a2cedcfc)'. Ident: 'index-705--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 2530)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.585-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (75a87e57-1ef3-4da7-9776-7a259202f74b).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.589-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (33b208a0-af24-4d01-9f08-28c0a2cedcfc)'. Ident: 'index-700--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 2530)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.572-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-699--8000595249233899911, commit timestamp: Timestamp(1574796765, 2530)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.585-0500 I STORAGE [conn110] renameCollection: renaming collection 3e793ce7-1bcc-40a3-9c62-a981bcce6b79 from test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.589-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (33b208a0-af24-4d01-9f08-28c0a2cedcfc)'. Ident: 'index-705--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 2530)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.574-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 27f18d49-7681-4abf-9d75-d5b00f49a822: test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c ( 3e793ce7-1bcc-40a3-9c62-a981bcce6b79 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.585-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (75a87e57-1ef3-4da7-9776-7a259202f74b)'. Ident: 'index-692-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 3036)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.589-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-699--4104909142373009110, commit timestamp: Timestamp(1574796765, 2530)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.587-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.585-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (75a87e57-1ef3-4da7-9776-7a259202f74b)'. Ident: 'index-695-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 3036)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.591-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: ef66b8fc-f2fc-4c03-a6cc-55944dc13561: test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c ( 3e793ce7-1bcc-40a3-9c62-a981bcce6b79 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.587-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.585-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-690-8224331490264904478, commit timestamp: Timestamp(1574796765, 3036)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.605-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.587-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 95a9cb7d-53cc-4f81-8b37-8c4c1c2945de: test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 (397b7151-be41-40c6-b546-95044e2b69b6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.585-0500 I INDEX [conn114] Registering index build: 45546782-c91b-4df6-9e0b-a0597d27d87d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.605-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.588-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.585-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.605-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 948ec78c-cd5e-4525-9a1f-db2810bc03fc: test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 (397b7151-be41-40c6-b546-95044e2b69b6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.588-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.585-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.605-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.591-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.585-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3766351720635422330, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2439146749797006975, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796765413), clusterTime: Timestamp(1574796765, 1518) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 1518), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 171ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.606-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.592-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 with provided UUID: 54c42a64-af99-4dd3-bc45-0edfb751a5dc and options: { uuid: UUID("54c42a64-af99-4dd3-bc45-0edfb751a5dc"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.586-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.609-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.595-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 95a9cb7d-53cc-4f81-8b37-8c4c1c2945de: test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 ( 397b7151-be41-40c6-b546-95044e2b69b6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.586-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.611-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 with provided UUID: 54c42a64-af99-4dd3-bc45-0edfb751a5dc and options: { uuid: UUID("54c42a64-af99-4dd3-bc45-0edfb751a5dc"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.610-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.595-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.613-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 948ec78c-cd5e-4525-9a1f-db2810bc03fc: test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 ( 397b7151-be41-40c6-b546-95044e2b69b6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.615-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c (3e793ce7-1bcc-40a3-9c62-a981bcce6b79) to test5_fsmdb0.agg_out and drop 75a87e57-1ef3-4da7-9776-7a259202f74b.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.598-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.626-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.615-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (75a87e57-1ef3-4da7-9776-7a259202f74b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 3036), t: 1 } and commit timestamp Timestamp(1574796765, 3036)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.604-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.630-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c (3e793ce7-1bcc-40a3-9c62-a981bcce6b79) to test5_fsmdb0.agg_out and drop 75a87e57-1ef3-4da7-9776-7a259202f74b.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.615-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (75a87e57-1ef3-4da7-9776-7a259202f74b).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.604-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.630-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (75a87e57-1ef3-4da7-9776-7a259202f74b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 3036), t: 1 } and commit timestamp Timestamp(1574796765, 3036)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.615-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 3e793ce7-1bcc-40a3-9c62-a981bcce6b79 from test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.604-0500 I STORAGE [conn114] Index build initialized: 45546782-c91b-4df6-9e0b-a0597d27d87d: test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 (54c42a64-af99-4dd3-bc45-0edfb751a5dc ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.630-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (75a87e57-1ef3-4da7-9776-7a259202f74b).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.615-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (75a87e57-1ef3-4da7-9776-7a259202f74b)'. Ident: 'index-702--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 3036)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.604-0500 I INDEX [conn114] Waiting for index build to complete: 45546782-c91b-4df6-9e0b-a0597d27d87d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.630-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 3e793ce7-1bcc-40a3-9c62-a981bcce6b79 from test5_fsmdb0.tmp.agg_out.2d91b1dc-fa88-4096-a0a5-646738a6c65c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.615-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (75a87e57-1ef3-4da7-9776-7a259202f74b)'. Ident: 'index-713--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 3036)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.604-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.630-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (75a87e57-1ef3-4da7-9776-7a259202f74b)'. Ident: 'index-702--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 3036)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.615-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-701--8000595249233899911, commit timestamp: Timestamp(1574796765, 3036)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.606-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 with generated UUID: 37665d99-8a0b-470b-8393-ee2513497d69 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.630-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (75a87e57-1ef3-4da7-9776-7a259202f74b)'. Ident: 'index-713--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 3036)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.631-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.607-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 156005a6-8fbc-418f-b54c-23fa46465a90: test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 ( 82d69723-7c63-43bb-bbf6-f924212a860c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.630-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-701--4104909142373009110, commit timestamp: Timestamp(1574796765, 3036)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.631-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.607-0500 I INDEX [conn112] Index build completed: 156005a6-8fbc-418f-b54c-23fa46465a90
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.648-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.631-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 925b1054-321b-4cf9-8027-6419deaf1385: test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 (82d69723-7c63-43bb-bbf6-f924212a860c ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.608-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 9544795f-7b22-4cf7-965b-e45e9a63b358: test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa ( 26bd50e6-8229-4ee0-bdc9-ba6cd3107488 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.648-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.632-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.608-0500 I INDEX [conn108] Index build completed: 9544795f-7b22-4cf7-965b-e45e9a63b358
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.648-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 990bca58-e4c9-46be-9c9e-ff4da97f793d: test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 (82d69723-7c63-43bb-bbf6-f924212a860c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.632-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.608-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 2087), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 24725 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 144ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.648-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.635-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.609-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.649-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.644-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 925b1054-321b-4cf9-8027-6419deaf1385: test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 ( 82d69723-7c63-43bb-bbf6-f924212a860c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.620-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.650-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.652-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.630-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.654-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 990bca58-e4c9-46be-9c9e-ff4da97f793d: test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 ( 82d69723-7c63-43bb-bbf6-f924212a860c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.652-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.630-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.669-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.652-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: dfd563c9-3972-41ad-a80a-bc8aa9ec16ce: test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa (26bd50e6-8229-4ee0-bdc9-ba6cd3107488 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.631-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (3e793ce7-1bcc-40a3-9c62-a981bcce6b79) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 3674), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.669-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.652-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.631-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (3e793ce7-1bcc-40a3-9c62-a981bcce6b79).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.669-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 2d75dcf5-635b-412f-9190-7a66e8a7727b: test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa (26bd50e6-8229-4ee0-bdc9-ba6cd3107488 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.652-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.631-0500 I STORAGE [conn46] renameCollection: renaming collection 397b7151-be41-40c6-b546-95044e2b69b6 from test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.669-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.655-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.631-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3e793ce7-1bcc-40a3-9c62-a981bcce6b79)'. Ident: 'index-700-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 3674)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.670-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.658-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: dfd563c9-3972-41ad-a80a-bc8aa9ec16ce: test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa ( 26bd50e6-8229-4ee0-bdc9-ba6cd3107488 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.631-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3e793ce7-1bcc-40a3-9c62-a981bcce6b79)'. Ident: 'index-703-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 3674)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.673-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.660-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 with provided UUID: 37665d99-8a0b-470b-8393-ee2513497d69 and options: { uuid: UUID("37665d99-8a0b-470b-8393-ee2513497d69"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.631-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-697-8224331490264904478, commit timestamp: Timestamp(1574796765, 3674)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.675-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 2d75dcf5-635b-412f-9190-7a66e8a7727b: test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa ( 26bd50e6-8229-4ee0-bdc9-ba6cd3107488 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.674-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.631-0500 I INDEX [conn110] Registering index build: 8c98e602-2eb4-4839-9039-ed3118fd5f9f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.677-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 with provided UUID: 37665d99-8a0b-470b-8393-ee2513497d69 and options: { uuid: UUID("37665d99-8a0b-470b-8393-ee2513497d69"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.694-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.631-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4080703657121643537, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8166505999994421035, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796765413), clusterTime: Timestamp(1574796765, 1518) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 1518), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 216ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.693-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.694-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.633-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 45546782-c91b-4df6-9e0b-a0597d27d87d: test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 ( 54c42a64-af99-4dd3-bc45-0edfb751a5dc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.722-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.694-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: d98f68a4-4b0c-46be-b4fc-a2c4d9f117c3: test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 (54c42a64-af99-4dd3-bc45-0edfb751a5dc ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.634-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a with generated UUID: 59924b55-fbe3-47a8-823e-12d3d731cb57 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.722-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.694-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.658-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.722-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: d28bbadc-5846-45bb-8287-fcdbdef39ebc: test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 (54c42a64-af99-4dd3-bc45-0edfb751a5dc ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.695-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.658-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.722-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.696-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 (397b7151-be41-40c6-b546-95044e2b69b6) to test5_fsmdb0.agg_out and drop 3e793ce7-1bcc-40a3-9c62-a981bcce6b79.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.658-0500 I STORAGE [conn110] Index build initialized: 8c98e602-2eb4-4839-9039-ed3118fd5f9f: test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 (37665d99-8a0b-470b-8393-ee2513497d69 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.723-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.698-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.658-0500 I INDEX [conn110] Waiting for index build to complete: 8c98e602-2eb4-4839-9039-ed3118fd5f9f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.724-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 (397b7151-be41-40c6-b546-95044e2b69b6) to test5_fsmdb0.agg_out and drop 3e793ce7-1bcc-40a3-9c62-a981bcce6b79.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.698-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (3e793ce7-1bcc-40a3-9c62-a981bcce6b79) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 3674), t: 1 } and commit timestamp Timestamp(1574796765, 3674)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.658-0500 I INDEX [conn114] Index build completed: 45546782-c91b-4df6-9e0b-a0597d27d87d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.725-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.698-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (3e793ce7-1bcc-40a3-9c62-a981bcce6b79).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.666-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.726-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (3e793ce7-1bcc-40a3-9c62-a981bcce6b79) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 3674), t: 1 } and commit timestamp Timestamp(1574796765, 3674)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.698-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 397b7151-be41-40c6-b546-95044e2b69b6 from test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.666-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.726-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (3e793ce7-1bcc-40a3-9c62-a981bcce6b79).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.698-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3e793ce7-1bcc-40a3-9c62-a981bcce6b79)'. Ident: 'index-708--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 3674)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.666-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (397b7151-be41-40c6-b546-95044e2b69b6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 4549), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.726-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 397b7151-be41-40c6-b546-95044e2b69b6 from test5_fsmdb0.tmp.agg_out.96f3e5ec-b003-426c-86dd-59f5d83aa0b4 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.698-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3e793ce7-1bcc-40a3-9c62-a981bcce6b79)'. Ident: 'index-717--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 3674)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.666-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (397b7151-be41-40c6-b546-95044e2b69b6).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.726-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3e793ce7-1bcc-40a3-9c62-a981bcce6b79)'. Ident: 'index-708--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 3674)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.698-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-707--8000595249233899911, commit timestamp: Timestamp(1574796765, 3674)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.666-0500 I STORAGE [conn112] renameCollection: renaming collection 82d69723-7c63-43bb-bbf6-f924212a860c from test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.726-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3e793ce7-1bcc-40a3-9c62-a981bcce6b79)'. Ident: 'index-717--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 3674)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.700-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d98f68a4-4b0c-46be-b4fc-a2c4d9f117c3: test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 ( 54c42a64-af99-4dd3-bc45-0edfb751a5dc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.666-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (397b7151-be41-40c6-b546-95044e2b69b6)'. Ident: 'index-701-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 4549)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.726-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-707--4104909142373009110, commit timestamp: Timestamp(1574796765, 3674)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.701-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a with provided UUID: 59924b55-fbe3-47a8-823e-12d3d731cb57 and options: { uuid: UUID("59924b55-fbe3-47a8-823e-12d3d731cb57"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.666-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (397b7151-be41-40c6-b546-95044e2b69b6)'. Ident: 'index-705-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 4549)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.728-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: d28bbadc-5846-45bb-8287-fcdbdef39ebc: test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 ( 54c42a64-af99-4dd3-bc45-0edfb751a5dc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.714-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.666-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-698-8224331490264904478, commit timestamp: Timestamp(1574796765, 4549)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.729-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a with provided UUID: 59924b55-fbe3-47a8-823e-12d3d731cb57 and options: { uuid: UUID("59924b55-fbe3-47a8-823e-12d3d731cb57"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.718-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 (82d69723-7c63-43bb-bbf6-f924212a860c) to test5_fsmdb0.agg_out and drop 397b7151-be41-40c6-b546-95044e2b69b6.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.667-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.744-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.719-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (397b7151-be41-40c6-b546-95044e2b69b6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 4549), t: 1 } and commit timestamp Timestamp(1574796765, 4549)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.667-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (82d69723-7c63-43bb-bbf6-f924212a860c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 4550), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.748-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 (82d69723-7c63-43bb-bbf6-f924212a860c) to test5_fsmdb0.agg_out and drop 397b7151-be41-40c6-b546-95044e2b69b6.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.719-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (397b7151-be41-40c6-b546-95044e2b69b6).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.667-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (82d69723-7c63-43bb-bbf6-f924212a860c).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.749-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (397b7151-be41-40c6-b546-95044e2b69b6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 4549), t: 1 } and commit timestamp Timestamp(1574796765, 4549)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.719-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 82d69723-7c63-43bb-bbf6-f924212a860c from test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.667-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2754382502701328745, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 531411033560965533, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796765482), clusterTime: Timestamp(1574796765, 2153) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 2217), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 184ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.749-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (397b7151-be41-40c6-b546-95044e2b69b6).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.719-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (397b7151-be41-40c6-b546-95044e2b69b6)'. Ident: 'index-710--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 4549)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.667-0500 I STORAGE [conn108] renameCollection: renaming collection 26bd50e6-8229-4ee0-bdc9-ba6cd3107488 from test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.749-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 82d69723-7c63-43bb-bbf6-f924212a860c from test5_fsmdb0.tmp.agg_out.115d5671-600c-4c23-8c01-ebbc940e2aa8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.719-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (397b7151-be41-40c6-b546-95044e2b69b6)'. Ident: 'index-719--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 4549)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.667-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (82d69723-7c63-43bb-bbf6-f924212a860c)'. Ident: 'index-708-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 4550)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.749-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (397b7151-be41-40c6-b546-95044e2b69b6)'. Ident: 'index-710--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 4549)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.719-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-709--8000595249233899911, commit timestamp: Timestamp(1574796765, 4549)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.667-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (82d69723-7c63-43bb-bbf6-f924212a860c)'. Ident: 'index-713-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 4550)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.749-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (397b7151-be41-40c6-b546-95044e2b69b6)'. Ident: 'index-719--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 4549)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.719-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa (26bd50e6-8229-4ee0-bdc9-ba6cd3107488) to test5_fsmdb0.agg_out and drop 82d69723-7c63-43bb-bbf6-f924212a860c.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.667-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-706-8224331490264904478, commit timestamp: Timestamp(1574796765, 4550)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.749-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-709--4104909142373009110, commit timestamp: Timestamp(1574796765, 4549)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.720-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (82d69723-7c63-43bb-bbf6-f924212a860c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 4550), t: 1 } and commit timestamp Timestamp(1574796765, 4550)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.667-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.749-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa (26bd50e6-8229-4ee0-bdc9-ba6cd3107488) to test5_fsmdb0.agg_out and drop 82d69723-7c63-43bb-bbf6-f924212a860c.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.720-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (82d69723-7c63-43bb-bbf6-f924212a860c).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.667-0500 I INDEX [conn46] Registering index build: 61c2a5d1-7c8a-473c-b92c-0da3fc613394
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.749-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (82d69723-7c63-43bb-bbf6-f924212a860c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 4550), t: 1 } and commit timestamp Timestamp(1574796765, 4550)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.720-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 26bd50e6-8229-4ee0-bdc9-ba6cd3107488 from test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.667-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2337610519165447302, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8300175692224425154, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796765414), clusterTime: Timestamp(1574796765, 1518) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 1519), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 252ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.749-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (82d69723-7c63-43bb-bbf6-f924212a860c).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.720-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (82d69723-7c63-43bb-bbf6-f924212a860c)'. Ident: 'index-716--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 4550)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.668-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.749-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 26bd50e6-8229-4ee0-bdc9-ba6cd3107488 from test5_fsmdb0.tmp.agg_out.854d7523-7668-4651-8cf3-99d18786beaa to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.720-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (82d69723-7c63-43bb-bbf6-f924212a860c)'. Ident: 'index-723--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 4550)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.670-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f with generated UUID: 3129686b-3501-451c-9979-22a32d700968 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.749-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (82d69723-7c63-43bb-bbf6-f924212a860c)'. Ident: 'index-716--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 4550)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.720-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-715--8000595249233899911, commit timestamp: Timestamp(1574796765, 4550)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.670-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b with generated UUID: ff3a27e8-62ae-4a59-a87b-8780d9016000 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.749-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (82d69723-7c63-43bb-bbf6-f924212a860c)'. Ident: 'index-723--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 4550)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.720-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f with provided UUID: 3129686b-3501-451c-9979-22a32d700968 and options: { uuid: UUID("3129686b-3501-451c-9979-22a32d700968"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.678-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.750-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-715--4104909142373009110, commit timestamp: Timestamp(1574796765, 4550)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.735-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.702-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.750-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f with provided UUID: 3129686b-3501-451c-9979-22a32d700968 and options: { uuid: UUID("3129686b-3501-451c-9979-22a32d700968"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.736-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b with provided UUID: ff3a27e8-62ae-4a59-a87b-8780d9016000 and options: { uuid: UUID("ff3a27e8-62ae-4a59-a87b-8780d9016000"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.702-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.764-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.752-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.702-0500 I STORAGE [conn46] Index build initialized: 61c2a5d1-7c8a-473c-b92c-0da3fc613394: test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a (59924b55-fbe3-47a8-823e-12d3d731cb57 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.765-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b with provided UUID: ff3a27e8-62ae-4a59-a87b-8780d9016000 and options: { uuid: UUID("ff3a27e8-62ae-4a59-a87b-8780d9016000"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.771-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.702-0500 I INDEX [conn46] Waiting for index build to complete: 61c2a5d1-7c8a-473c-b92c-0da3fc613394
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.781-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.771-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.702-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.799-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.771-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 1d845fc8-25cc-4be7-8cb0-89591fa05c28: test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 (37665d99-8a0b-470b-8393-ee2513497d69 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.704-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 8c98e602-2eb4-4839-9039-ed3118fd5f9f: test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 ( 37665d99-8a0b-470b-8393-ee2513497d69 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.799-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.771-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.704-0500 I INDEX [conn110] Index build completed: 8c98e602-2eb4-4839-9039-ed3118fd5f9f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.799-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 4b47140c-5890-4623-92c7-5ece6b501581: test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 (37665d99-8a0b-470b-8393-ee2513497d69 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.772-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.713-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.800-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.775-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.719-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.800-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.779-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 1d845fc8-25cc-4be7-8cb0-89591fa05c28: test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 ( 37665d99-8a0b-470b-8393-ee2513497d69 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.719-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.803-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.796-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.721-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.807-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4b47140c-5890-4623-92c7-5ece6b501581: test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 ( 37665d99-8a0b-470b-8393-ee2513497d69 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.796-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.721-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.820-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.796-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 035ea05d-3da7-4ae3-b61d-ca7a8951cc68: test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a (59924b55-fbe3-47a8-823e-12d3d731cb57 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.722-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (26bd50e6-8229-4ee0-bdc9-ba6cd3107488) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 5058), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.820-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.796-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.722-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (26bd50e6-8229-4ee0-bdc9-ba6cd3107488).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.820-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: fe69f4d6-4d24-4291-9076-6df934a89135: test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a (59924b55-fbe3-47a8-823e-12d3d731cb57 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.797-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.722-0500 I STORAGE [conn112] renameCollection: renaming collection 54c42a64-af99-4dd3-bc45-0edfb751a5dc from test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.820-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.798-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 (54c42a64-af99-4dd3-bc45-0edfb751a5dc) to test5_fsmdb0.agg_out and drop 26bd50e6-8229-4ee0-bdc9-ba6cd3107488.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.722-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (26bd50e6-8229-4ee0-bdc9-ba6cd3107488)'. Ident: 'index-702-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 5058)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.820-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.799-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.722-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (26bd50e6-8229-4ee0-bdc9-ba6cd3107488)'. Ident: 'index-709-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 5058)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.821-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 (54c42a64-af99-4dd3-bc45-0edfb751a5dc) to test5_fsmdb0.agg_out and drop 26bd50e6-8229-4ee0-bdc9-ba6cd3107488.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.799-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (26bd50e6-8229-4ee0-bdc9-ba6cd3107488) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 5058), t: 1 } and commit timestamp Timestamp(1574796765, 5058)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.722-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-699-8224331490264904478, commit timestamp: Timestamp(1574796765, 5058)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.823-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.799-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (26bd50e6-8229-4ee0-bdc9-ba6cd3107488).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.722-0500 I INDEX [conn114] Registering index build: fd0559d6-a7a4-4341-9f82-351bfd7333f1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.823-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (26bd50e6-8229-4ee0-bdc9-ba6cd3107488) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 5058), t: 1 } and commit timestamp Timestamp(1574796765, 5058)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.799-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 54c42a64-af99-4dd3-bc45-0edfb751a5dc from test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.722-0500 I INDEX [conn108] Registering index build: 7cfe3b79-32c0-4749-9444-ea18a560dbba
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.823-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (26bd50e6-8229-4ee0-bdc9-ba6cd3107488).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.799-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (26bd50e6-8229-4ee0-bdc9-ba6cd3107488)'. Ident: 'index-712--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 5058)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.722-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1113336120875419650, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1781062416773610619, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796765537), clusterTime: Timestamp(1574796765, 2530) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 2532), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796757, 2085), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 184ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.823-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 54c42a64-af99-4dd3-bc45-0edfb751a5dc from test5_fsmdb0.tmp.agg_out.690fae8f-ba1f-450b-9cbd-ce9fafcb6534 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.799-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (26bd50e6-8229-4ee0-bdc9-ba6cd3107488)'. Ident: 'index-725--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 5058)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.723-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 61c2a5d1-7c8a-473c-b92c-0da3fc613394: test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a ( 59924b55-fbe3-47a8-823e-12d3d731cb57 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.823-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (26bd50e6-8229-4ee0-bdc9-ba6cd3107488)'. Ident: 'index-712--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 5058)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.799-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-711--8000595249233899911, commit timestamp: Timestamp(1574796765, 5058)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.725-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 with generated UUID: 71ef8fbb-d2bf-4837-9d18-655dd0f90a9f and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.823-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (26bd50e6-8229-4ee0-bdc9-ba6cd3107488)'. Ident: 'index-725--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 5058)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.800-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 with provided UUID: 71ef8fbb-d2bf-4837-9d18-655dd0f90a9f and options: { uuid: UUID("71ef8fbb-d2bf-4837-9d18-655dd0f90a9f"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.745-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.823-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-711--4104909142373009110, commit timestamp: Timestamp(1574796765, 5058)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.800-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 035ea05d-3da7-4ae3-b61d-ca7a8951cc68: test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a ( 59924b55-fbe3-47a8-823e-12d3d731cb57 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.746-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.824-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 with provided UUID: 71ef8fbb-d2bf-4837-9d18-655dd0f90a9f and options: { uuid: UUID("71ef8fbb-d2bf-4837-9d18-655dd0f90a9f"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.815-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.746-0500 I STORAGE [conn114] Index build initialized: fd0559d6-a7a4-4341-9f82-351bfd7333f1: test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f (3129686b-3501-451c-9979-22a32d700968 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.826-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: fe69f4d6-4d24-4291-9076-6df934a89135: test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a ( 59924b55-fbe3-47a8-823e-12d3d731cb57 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.835-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.746-0500 I INDEX [conn114] Waiting for index build to complete: fd0559d6-a7a4-4341-9f82-351bfd7333f1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.839-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.835-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.746-0500 I INDEX [conn46] Index build completed: 61c2a5d1-7c8a-473c-b92c-0da3fc613394
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.858-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.835-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 34bd14f4-6a85-44ac-829b-b381d3b87620: test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f (3129686b-3501-451c-9979-22a32d700968 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.746-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.858-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.835-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.754-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.858-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 1d400c4f-0617-4856-822a-662815d74ebb: test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f (3129686b-3501-451c-9979-22a32d700968 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.836-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.755-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.859-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.837-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 (37665d99-8a0b-470b-8393-ee2513497d69) to test5_fsmdb0.agg_out and drop 54c42a64-af99-4dd3-bc45-0edfb751a5dc.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.763-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.859-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.837-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.770-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.860-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 (37665d99-8a0b-470b-8393-ee2513497d69) to test5_fsmdb0.agg_out and drop 54c42a64-af99-4dd3-bc45-0edfb751a5dc.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.838-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (54c42a64-af99-4dd3-bc45-0edfb751a5dc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 5564), t: 1 } and commit timestamp Timestamp(1574796765, 5564)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.770-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.862-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.838-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (54c42a64-af99-4dd3-bc45-0edfb751a5dc).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.770-0500 I STORAGE [conn108] Index build initialized: 7cfe3b79-32c0-4749-9444-ea18a560dbba: test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b (ff3a27e8-62ae-4a59-a87b-8780d9016000 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.862-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (54c42a64-af99-4dd3-bc45-0edfb751a5dc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 5564), t: 1 } and commit timestamp Timestamp(1574796765, 5564)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.838-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 37665d99-8a0b-470b-8393-ee2513497d69 from test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.770-0500 I INDEX [conn108] Waiting for index build to complete: 7cfe3b79-32c0-4749-9444-ea18a560dbba
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.862-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (54c42a64-af99-4dd3-bc45-0edfb751a5dc).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.838-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (54c42a64-af99-4dd3-bc45-0edfb751a5dc)'. Ident: 'index-722--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 5564)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.770-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.862-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 37665d99-8a0b-470b-8393-ee2513497d69 from test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.838-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (54c42a64-af99-4dd3-bc45-0edfb751a5dc)'. Ident: 'index-729--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 5564)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.770-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (54c42a64-af99-4dd3-bc45-0edfb751a5dc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 5564), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.862-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (54c42a64-af99-4dd3-bc45-0edfb751a5dc)'. Ident: 'index-722--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 5564)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.838-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-721--8000595249233899911, commit timestamp: Timestamp(1574796765, 5564)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.770-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (54c42a64-af99-4dd3-bc45-0edfb751a5dc).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.862-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (54c42a64-af99-4dd3-bc45-0edfb751a5dc)'. Ident: 'index-729--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 5564)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.840-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 34bd14f4-6a85-44ac-829b-b381d3b87620: test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f ( 3129686b-3501-451c-9979-22a32d700968 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.770-0500 I STORAGE [conn110] renameCollection: renaming collection 37665d99-8a0b-470b-8393-ee2513497d69 from test5_fsmdb0.tmp.agg_out.e91f8bb1-c17f-4053-a633-2216306115a3 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.862-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-721--4104909142373009110, commit timestamp: Timestamp(1574796765, 5564)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.842-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 with provided UUID: f941104e-0e0c-4223-83eb-9c6b75e5a251 and options: { uuid: UUID("f941104e-0e0c-4223-83eb-9c6b75e5a251"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.770-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (54c42a64-af99-4dd3-bc45-0edfb751a5dc)'. Ident: 'index-712-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 5564)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.864-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 1d400c4f-0617-4856-822a-662815d74ebb: test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f ( 3129686b-3501-451c-9979-22a32d700968 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.854-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.770-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (54c42a64-af99-4dd3-bc45-0edfb751a5dc)'. Ident: 'index-715-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 5564)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.865-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 with provided UUID: f941104e-0e0c-4223-83eb-9c6b75e5a251 and options: { uuid: UUID("f941104e-0e0c-4223-83eb-9c6b75e5a251"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.870-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.770-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-710-8224331490264904478, commit timestamp: Timestamp(1574796765, 5564)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.880-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.870-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.771-0500 I INDEX [conn112] Registering index build: 5dab5060-56e9-452e-a6a5-0a7c2439511c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.893-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.870-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 7227f532-119e-4703-9d0a-5b70e0b94f83: test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b (ff3a27e8-62ae-4a59-a87b-8780d9016000 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.771-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.893-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.870-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.771-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 720363106980553203, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6403859588479583709, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796765605), clusterTime: Timestamp(1574796765, 3042) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 3106), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 164ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.893-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: eaef164e-2291-44d8-9d11-1a9a40773276: test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b (ff3a27e8-62ae-4a59-a87b-8780d9016000 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.871-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.771-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: fd0559d6-a7a4-4341-9f82-351bfd7333f1: test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f ( 3129686b-3501-451c-9979-22a32d700968 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.893-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.874-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.771-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.894-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.875-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a (59924b55-fbe3-47a8-823e-12d3d731cb57) to test5_fsmdb0.agg_out and drop 37665d99-8a0b-470b-8393-ee2513497d69.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.773-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 with generated UUID: f941104e-0e0c-4223-83eb-9c6b75e5a251 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.896-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.875-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (37665d99-8a0b-470b-8393-ee2513497d69) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 6069), t: 1 } and commit timestamp Timestamp(1574796765, 6069)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.775-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.897-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a (59924b55-fbe3-47a8-823e-12d3d731cb57) to test5_fsmdb0.agg_out and drop 37665d99-8a0b-470b-8393-ee2513497d69.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.875-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (37665d99-8a0b-470b-8393-ee2513497d69).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.793-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 7cfe3b79-32c0-4749-9444-ea18a560dbba: test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b ( ff3a27e8-62ae-4a59-a87b-8780d9016000 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.897-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (37665d99-8a0b-470b-8393-ee2513497d69) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 6069), t: 1 } and commit timestamp Timestamp(1574796765, 6069)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.875-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 59924b55-fbe3-47a8-823e-12d3d731cb57 from test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.802-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.897-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (37665d99-8a0b-470b-8393-ee2513497d69).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.875-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (37665d99-8a0b-470b-8393-ee2513497d69)'. Ident: 'index-728--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 6069)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.802-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.898-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 59924b55-fbe3-47a8-823e-12d3d731cb57 from test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.875-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (37665d99-8a0b-470b-8393-ee2513497d69)'. Ident: 'index-737--8000595249233899911', commit timestamp: 'Timestamp(1574796765, 6069)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.802-0500 I STORAGE [conn112] Index build initialized: 5dab5060-56e9-452e-a6a5-0a7c2439511c: test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.898-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (37665d99-8a0b-470b-8393-ee2513497d69)'. Ident: 'index-728--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 6069)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.875-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-727--8000595249233899911, commit timestamp: Timestamp(1574796765, 6069)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.802-0500 I INDEX [conn112] Waiting for index build to complete: 5dab5060-56e9-452e-a6a5-0a7c2439511c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.898-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (37665d99-8a0b-470b-8393-ee2513497d69)'. Ident: 'index-737--4104909142373009110', commit timestamp: 'Timestamp(1574796765, 6069)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:45.876-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 7227f532-119e-4703-9d0a-5b70e0b94f83: test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b ( ff3a27e8-62ae-4a59-a87b-8780d9016000 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.802-0500 I INDEX [conn108] Index build completed: 7cfe3b79-32c0-4749-9444-ea18a560dbba
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.898-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-727--4104909142373009110, commit timestamp: Timestamp(1574796765, 6069)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.089-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.802-0500 I INDEX [conn114] Index build completed: fd0559d6-a7a4-4341-9f82-351bfd7333f1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:45.899-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: eaef164e-2291-44d8-9d11-1a9a40773276: test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b ( ff3a27e8-62ae-4a59-a87b-8780d9016000 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.089-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.810-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.105-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.089-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 82a21f58-4464-4388-8dca-4bc92a5086e3: test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.811-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.105-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.089-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.811-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (37665d99-8a0b-470b-8393-ee2513497d69) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796765, 6069), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.105-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 36510d1b-d7d7-445a-9afc-fb1c68cc6e07: test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.090-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.811-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (37665d99-8a0b-470b-8393-ee2513497d69).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.105-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.091-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b (ff3a27e8-62ae-4a59-a87b-8780d9016000) to test5_fsmdb0.agg_out and drop 59924b55-fbe3-47a8-823e-12d3d731cb57.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.811-0500 I STORAGE [conn46] renameCollection: renaming collection 59924b55-fbe3-47a8-823e-12d3d731cb57 from test5_fsmdb0.tmp.agg_out.69939e60-3aac-4049-8922-269120ccaf5a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.106-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.093-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.811-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (37665d99-8a0b-470b-8393-ee2513497d69)'. Ident: 'index-718-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 6069)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.107-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b (ff3a27e8-62ae-4a59-a87b-8780d9016000) to test5_fsmdb0.agg_out and drop 59924b55-fbe3-47a8-823e-12d3d731cb57.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.093-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (59924b55-fbe3-47a8-823e-12d3d731cb57) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 4), t: 1 } and commit timestamp Timestamp(1574796766, 4)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.811-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (37665d99-8a0b-470b-8393-ee2513497d69)'. Ident: 'index-719-8224331490264904478', commit timestamp: 'Timestamp(1574796765, 6069)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.107-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.093-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (59924b55-fbe3-47a8-823e-12d3d731cb57).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.811-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-717-8224331490264904478, commit timestamp: Timestamp(1574796765, 6069)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.108-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (59924b55-fbe3-47a8-823e-12d3d731cb57) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 4), t: 1 } and commit timestamp Timestamp(1574796766, 4)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.093-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection ff3a27e8-62ae-4a59-a87b-8780d9016000 from test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.811-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.108-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (59924b55-fbe3-47a8-823e-12d3d731cb57).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.094-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (59924b55-fbe3-47a8-823e-12d3d731cb57)'. Ident: 'index-732--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 4)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.811-0500 I INDEX [conn110] Registering index build: 59400ce1-5ac2-4895-b037-a9dbb832d26f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.108-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection ff3a27e8-62ae-4a59-a87b-8780d9016000 from test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.094-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (59924b55-fbe3-47a8-823e-12d3d731cb57)'. Ident: 'index-739--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 4)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.811-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 501734577526649557, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5711315269518266133, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796765632), clusterTime: Timestamp(1574796765, 3802) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 3930), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 178ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.108-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (59924b55-fbe3-47a8-823e-12d3d731cb57)'. Ident: 'index-732--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 4)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.094-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-731--8000595249233899911, commit timestamp: Timestamp(1574796766, 4)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.094-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f (3129686b-3501-451c-9979-22a32d700968) to test5_fsmdb0.agg_out and drop ff3a27e8-62ae-4a59-a87b-8780d9016000.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.108-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (59924b55-fbe3-47a8-823e-12d3d731cb57)'. Ident: 'index-739--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 4)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.108-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-731--4104909142373009110, commit timestamp: Timestamp(1574796766, 4)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.095-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (ff3a27e8-62ae-4a59-a87b-8780d9016000) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 5), t: 1 } and commit timestamp Timestamp(1574796766, 5)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.108-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f (3129686b-3501-451c-9979-22a32d700968) to test5_fsmdb0.agg_out and drop ff3a27e8-62ae-4a59-a87b-8780d9016000.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.095-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (ff3a27e8-62ae-4a59-a87b-8780d9016000).
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:46.810-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796766, 511), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 691ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.109-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (ff3a27e8-62ae-4a59-a87b-8780d9016000) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 5), t: 1 } and commit timestamp Timestamp(1574796766, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.095-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 3129686b-3501-451c-9979-22a32d700968 from test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.109-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (ff3a27e8-62ae-4a59-a87b-8780d9016000).
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:46.811-0500 I COMMAND [conn204] command test5_fsmdb0.agg_out appName: "tid:4" command: dropIndexes { dropIndexes: "agg_out", index: { flag: 1.0 }, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796766, 3536), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } numYields:0 reslen:431 protocol:op_msg 519ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.095-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ff3a27e8-62ae-4a59-a87b-8780d9016000)'. Ident: 'index-736--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 5)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.109-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 3129686b-3501-451c-9979-22a32d700968 from test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.095-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ff3a27e8-62ae-4a59-a87b-8780d9016000)'. Ident: 'index-747--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 5)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.109-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ff3a27e8-62ae-4a59-a87b-8780d9016000)'. Ident: 'index-736--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 5)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.095-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-735--8000595249233899911, commit timestamp: Timestamp(1574796766, 5)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.109-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ff3a27e8-62ae-4a59-a87b-8780d9016000)'. Ident: 'index-747--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 5)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.095-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 82a21f58-4464-4388-8dca-4bc92a5086e3: test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 ( 71ef8fbb-d2bf-4837-9d18-655dd0f90a9f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.109-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-735--4104909142373009110, commit timestamp: Timestamp(1574796766, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.106-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 with provided UUID: b5be7a35-d18e-47c4-bde0-c1502d32e3e4 and options: { uuid: UUID("b5be7a35-d18e-47c4-bde0-c1502d32e3e4"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.109-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 36510d1b-d7d7-445a-9afc-fb1c68cc6e07: test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 ( 71ef8fbb-d2bf-4837-9d18-655dd0f90a9f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.120-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.121-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 with provided UUID: b5be7a35-d18e-47c4-bde0-c1502d32e3e4 and options: { uuid: UUID("b5be7a35-d18e-47c4-bde0-c1502d32e3e4"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.120-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 with provided UUID: 1d6fce01-9df6-4152-8d74-4ed9a632908b and options: { uuid: UUID("1d6fce01-9df6-4152-8d74-4ed9a632908b"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.134-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.134-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.135-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 with provided UUID: 1d6fce01-9df6-4152-8d74-4ed9a632908b and options: { uuid: UUID("1d6fce01-9df6-4152-8d74-4ed9a632908b"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.135-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 with provided UUID: df3d59f1-fcdc-49bf-99d1-83876386957e and options: { uuid: UUID("df3d59f1-fcdc-49bf-99d1-83876386957e"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.151-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.148-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.152-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 with provided UUID: df3d59f1-fcdc-49bf-99d1-83876386957e and options: { uuid: UUID("df3d59f1-fcdc-49bf-99d1-83876386957e"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.168-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.169-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.168-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.197-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.168-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 7ca9dd75-38f9-4b6a-9f92-60db8875c933: test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 (f941104e-0e0c-4223-83eb-9c6b75e5a251 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.197-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.812-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.168-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.198-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 21e15adb-f406-44f7-a90d-8f713c800f43: test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 (f941104e-0e0c-4223-83eb-9c6b75e5a251 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:45.828-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.169-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.198-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.069-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.170-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f) to test5_fsmdb0.agg_out and drop 3129686b-3501-451c-9979-22a32d700968.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.198-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.069-0500 I STORAGE [conn110] Index build initialized: 59400ce1-5ac2-4895-b037-a9dbb832d26f: test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 (f941104e-0e0c-4223-83eb-9c6b75e5a251 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.171-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.199-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f) to test5_fsmdb0.agg_out and drop 3129686b-3501-451c-9979-22a32d700968.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.069-0500 I INDEX [conn110] Waiting for index build to complete: 59400ce1-5ac2-4895-b037-a9dbb832d26f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.172-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (3129686b-3501-451c-9979-22a32d700968) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 511), t: 1 } and commit timestamp Timestamp(1574796766, 511)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.201-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.072-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.172-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (3129686b-3501-451c-9979-22a32d700968).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.201-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (3129686b-3501-451c-9979-22a32d700968) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 511), t: 1 } and commit timestamp Timestamp(1574796766, 511)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.072-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.172-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 71ef8fbb-d2bf-4837-9d18-655dd0f90a9f from test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.201-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (3129686b-3501-451c-9979-22a32d700968).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.072-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (59924b55-fbe3-47a8-823e-12d3d731cb57) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 4), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.172-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3129686b-3501-451c-9979-22a32d700968)'. Ident: 'index-734--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 511)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.201-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 71ef8fbb-d2bf-4837-9d18-655dd0f90a9f from test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.072-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (59924b55-fbe3-47a8-823e-12d3d731cb57).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.172-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3129686b-3501-451c-9979-22a32d700968)'. Ident: 'index-743--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 511)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.201-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3129686b-3501-451c-9979-22a32d700968)'. Ident: 'index-734--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 511)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.072-0500 I STORAGE [conn114] renameCollection: renaming collection ff3a27e8-62ae-4a59-a87b-8780d9016000 from test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.172-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-733--8000595249233899911, commit timestamp: Timestamp(1574796766, 511)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.201-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3129686b-3501-451c-9979-22a32d700968)'. Ident: 'index-743--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 511)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.073-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (59924b55-fbe3-47a8-823e-12d3d731cb57)'. Ident: 'index-722-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 4)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.172-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d with provided UUID: c3763a64-0bfd-4e76-821d-698d988201e3 and options: { uuid: UUID("c3763a64-0bfd-4e76-821d-698d988201e3"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.201-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-733--4104909142373009110, commit timestamp: Timestamp(1574796766, 511)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.073-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (59924b55-fbe3-47a8-823e-12d3d731cb57)'. Ident: 'index-723-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 4)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.173-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 7ca9dd75-38f9-4b6a-9f92-60db8875c933: test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 ( f941104e-0e0c-4223-83eb-9c6b75e5a251 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.202-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d with provided UUID: c3763a64-0bfd-4e76-821d-698d988201e3 and options: { uuid: UUID("c3763a64-0bfd-4e76-821d-698d988201e3"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.073-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-720-8224331490264904478, commit timestamp: Timestamp(1574796766, 4)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.186-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.203-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 21e15adb-f406-44f7-a90d-8f713c800f43: test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 ( f941104e-0e0c-4223-83eb-9c6b75e5a251 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.073-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b appName: "tid:1" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.30e63e34-3042-49aa-b76d-5bfae690d47b", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "off", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 7069), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 242820 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 243ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.192-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 (f941104e-0e0c-4223-83eb-9c6b75e5a251) to test5_fsmdb0.agg_out and drop 71ef8fbb-d2bf-4837-9d18-655dd0f90a9f.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.219-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.073-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.192-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 1015), t: 1 } and commit timestamp Timestamp(1574796766, 1015)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.225-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 (f941104e-0e0c-4223-83eb-9c6b75e5a251) to test5_fsmdb0.agg_out and drop 71ef8fbb-d2bf-4837-9d18-655dd0f90a9f.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.073-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (ff3a27e8-62ae-4a59-a87b-8780d9016000) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 5), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.192-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.226-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 1015), t: 1 } and commit timestamp Timestamp(1574796766, 1015)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.073-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (ff3a27e8-62ae-4a59-a87b-8780d9016000).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.193-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection f941104e-0e0c-4223-83eb-9c6b75e5a251 from test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.226-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.073-0500 I STORAGE [conn108] renameCollection: renaming collection 3129686b-3501-451c-9979-22a32d700968 from test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.193-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f)'. Ident: 'index-742--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 1015)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.226-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection f941104e-0e0c-4223-83eb-9c6b75e5a251 from test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.073-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ff3a27e8-62ae-4a59-a87b-8780d9016000)'. Ident: 'index-728-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 5)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.193-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f)'. Ident: 'index-749--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 1015)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.226-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f)'. Ident: 'index-742--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.073-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ff3a27e8-62ae-4a59-a87b-8780d9016000)'. Ident: 'index-733-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 5)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.193-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-741--8000595249233899911, commit timestamp: Timestamp(1574796766, 1015)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.226-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f)'. Ident: 'index-749--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.073-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-725-8224331490264904478, commit timestamp: Timestamp(1574796766, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.204-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d with provided UUID: 22157587-4a84-46d6-b062-436bedaef575 and options: { uuid: UUID("22157587-4a84-46d6-b062-436bedaef575"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.226-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-741--4104909142373009110, commit timestamp: Timestamp(1574796766, 1015)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.073-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4395032128618378469, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6972028802270770586, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796765669), clusterTime: Timestamp(1574796765, 4550) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 4550), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 403ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.220-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.226-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d with provided UUID: 22157587-4a84-46d6-b062-436bedaef575 and options: { uuid: UUID("22157587-4a84-46d6-b062-436bedaef575"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.073-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f appName: "tid:3" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.f3388ac3-1292-4be4-9f5d-e878cc76826f", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "off", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 7069), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 242118 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 242ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.236-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.243-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.073-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.236-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.259-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.073-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796765, 5560), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796765, 5560), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796765, 5560). Collection minimum timestamp is Timestamp(1574796766, 5)" errName:SnapshotUnavailable errCode:246 reslen:579 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 228253 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 228ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.236-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: d00041e0-f39f-4a02-924c-153b724f4c7f: test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 (b5be7a35-d18e-47c4-bde0-c1502d32e3e4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.259-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.073-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4982950462205293984, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1172308854811062392, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796765668), clusterTime: Timestamp(1574796765, 4550) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 4550), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 404ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.236-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.259-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: b81fe4cc-9465-48d0-8af6-277663171f1c: test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 (b5be7a35-d18e-47c4-bde0-c1502d32e3e4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.074-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 with generated UUID: b5be7a35-d18e-47c4-bde0-c1502d32e3e4 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.237-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.259-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.075-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 5dab5060-56e9-452e-a6a5-0a7c2439511c: test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 ( 71ef8fbb-d2bf-4837-9d18-655dd0f90a9f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.239-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.259-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.075-0500 I INDEX [conn112] Index build completed: 5dab5060-56e9-452e-a6a5-0a7c2439511c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.248-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d00041e0-f39f-4a02-924c-153b724f4c7f: test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 ( b5be7a35-d18e-47c4-bde0-c1502d32e3e4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.262-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.075-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 5560), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 15333 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 319ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.255-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.271-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: b81fe4cc-9465-48d0-8af6-277663171f1c: test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 ( b5be7a35-d18e-47c4-bde0-c1502d32e3e4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.075-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.255-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.279-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.076-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 with generated UUID: 1d6fce01-9df6-4152-8d74-4ed9a632908b and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.255-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: b2deacf5-b7f0-45a7-b47b-59b2572c9fc4: test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 (df3d59f1-fcdc-49bf-99d1-83876386957e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.279-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.076-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 with generated UUID: df3d59f1-fcdc-49bf-99d1-83876386957e and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.255-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.280-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 14d13cb6-cd8f-43a6-82f6-1c54bec86ddd: test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 (df3d59f1-fcdc-49bf-99d1-83876386957e ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.096-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.256-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.280-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.104-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.258-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.280-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.110-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.260-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: b2deacf5-b7f0-45a7-b47b-59b2572c9fc4: test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 ( df3d59f1-fcdc-49bf-99d1-83876386957e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.282-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.116-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.276-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.284-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 14d13cb6-cd8f-43a6-82f6-1c54bec86ddd: test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 ( df3d59f1-fcdc-49bf-99d1-83876386957e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.116-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.276-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.305-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.116-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (3129686b-3501-451c-9979-22a32d700968) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 511), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.276-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: cfe23a86-94ef-4076-9f71-309412194b7a: test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 (1d6fce01-9df6-4152-8d74-4ed9a632908b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.305-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.117-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (3129686b-3501-451c-9979-22a32d700968).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.276-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.305-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 7e62ffb8-8838-4e12-a9ee-997ead4fe7aa: test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 (1d6fce01-9df6-4152-8d74-4ed9a632908b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.117-0500 I STORAGE [conn112] renameCollection: renaming collection 71ef8fbb-d2bf-4837-9d18-655dd0f90a9f from test5_fsmdb0.tmp.agg_out.bb0ee42b-d1ea-4137-9f71-eb6c1940e4e1 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.277-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.305-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.117-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3129686b-3501-451c-9979-22a32d700968)'. Ident: 'index-727-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 511)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.279-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.306-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.117-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3129686b-3501-451c-9979-22a32d700968)'. Ident: 'index-729-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 511)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.282-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: cfe23a86-94ef-4076-9f71-309412194b7a: test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 ( 1d6fce01-9df6-4152-8d74-4ed9a632908b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.308-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.117-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-724-8224331490264904478, commit timestamp: Timestamp(1574796766, 511)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.297-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.312-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 7e62ffb8-8838-4e12-a9ee-997ead4fe7aa: test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 ( 1d6fce01-9df6-4152-8d74-4ed9a632908b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.117-0500 I INDEX [conn108] Registering index build: 945bb3af-a36f-4a94-b31d-51dc79c5e33a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.297-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.326-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.117-0500 I INDEX [conn46] Registering index build: 37a58cf9-f010-451c-bf37-eea70ddc9197
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.297-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 457b6f7e-ca94-407f-a5dc-df26af9f6e08: test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d (c3763a64-0bfd-4e76-821d-698d988201e3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.326-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.117-0500 I INDEX [conn114] Registering index build: 252327bd-a92d-4a4d-bccf-de18fe0707a0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.297-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.326-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 33c60a30-4f74-4457-b656-3abf03525de5: test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d (c3763a64-0bfd-4e76-821d-698d988201e3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.117-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5590148107350341929, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5993711492269950987, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796765724), clusterTime: Timestamp(1574796765, 5058) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 5058), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 392ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.298-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.326-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.119-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 59400ce1-5ac2-4895-b037-a9dbb832d26f: test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 ( f941104e-0e0c-4223-83eb-9c6b75e5a251 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.299-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.326-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.120-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d with generated UUID: c3763a64-0bfd-4e76-821d-698d988201e3 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.306-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 457b6f7e-ca94-407f-a5dc-df26af9f6e08: test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d ( c3763a64-0bfd-4e76-821d-698d988201e3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.329-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.138-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.313-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.336-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 33c60a30-4f74-4457-b656-3abf03525de5: test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d ( c3763a64-0bfd-4e76-821d-698d988201e3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.138-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.313-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.342-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.138-0500 I STORAGE [conn108] Index build initialized: 945bb3af-a36f-4a94-b31d-51dc79c5e33a: test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 (b5be7a35-d18e-47c4-bde0-c1502d32e3e4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.313-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 46713236-3a13-4d57-b93d-280f8a4eccee: test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d (22157587-4a84-46d6-b062-436bedaef575 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.342-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.138-0500 I INDEX [conn108] Waiting for index build to complete: 945bb3af-a36f-4a94-b31d-51dc79c5e33a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.314-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.342-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 33fa0054-df5b-4b9e-860d-f2892c22b846: test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d (22157587-4a84-46d6-b062-436bedaef575 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.138-0500 I INDEX [conn110] Index build completed: 59400ce1-5ac2-4895-b037-a9dbb832d26f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.314-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.342-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.138-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 6069), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 114 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 327ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.315-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 (b5be7a35-d18e-47c4-bde0-c1502d32e3e4) to test5_fsmdb0.agg_out and drop f941104e-0e0c-4223-83eb-9c6b75e5a251.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.343-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.145-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.318-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.344-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 (b5be7a35-d18e-47c4-bde0-c1502d32e3e4) to test5_fsmdb0.agg_out and drop f941104e-0e0c-4223-83eb-9c6b75e5a251.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.145-0500 I INDEX [conn112] Registering index build: 4380f39e-d7c2-4d8d-a74b-ac5162269150
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.318-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (f941104e-0e0c-4223-83eb-9c6b75e5a251) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 1530), t: 1 } and commit timestamp Timestamp(1574796766, 1530)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.345-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.163-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.318-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (f941104e-0e0c-4223-83eb-9c6b75e5a251).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.345-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (f941104e-0e0c-4223-83eb-9c6b75e5a251) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 1530), t: 1 } and commit timestamp Timestamp(1574796766, 1530)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.163-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.318-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection b5be7a35-d18e-47c4-bde0-c1502d32e3e4 from test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.345-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (f941104e-0e0c-4223-83eb-9c6b75e5a251).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.163-0500 I STORAGE [conn46] Index build initialized: 37a58cf9-f010-451c-bf37-eea70ddc9197: test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 (df3d59f1-fcdc-49bf-99d1-83876386957e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.318-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f941104e-0e0c-4223-83eb-9c6b75e5a251)'. Ident: 'index-746--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 1530)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.345-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection b5be7a35-d18e-47c4-bde0-c1502d32e3e4 from test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.163-0500 I INDEX [conn46] Waiting for index build to complete: 37a58cf9-f010-451c-bf37-eea70ddc9197
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.318-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f941104e-0e0c-4223-83eb-9c6b75e5a251)'. Ident: 'index-757--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 1530)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.345-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f941104e-0e0c-4223-83eb-9c6b75e5a251)'. Ident: 'index-746--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 1530)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.163-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.318-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-745--8000595249233899911, commit timestamp: Timestamp(1574796766, 1530)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.345-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f941104e-0e0c-4223-83eb-9c6b75e5a251)'. Ident: 'index-757--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 1530)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.163-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 1015), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.319-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 46713236-3a13-4d57-b93d-280f8a4eccee: test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d ( 22157587-4a84-46d6-b062-436bedaef575 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.345-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-745--4104909142373009110, commit timestamp: Timestamp(1574796766, 1530)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.163-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.331-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b with provided UUID: 7ae1c0e8-6a07-4767-872a-447ad75a3c9a and options: { uuid: UUID("7ae1c0e8-6a07-4767-872a-447ad75a3c9a"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.346-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 33fa0054-df5b-4b9e-860d-f2892c22b846: test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d ( 22157587-4a84-46d6-b062-436bedaef575 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.163-0500 I STORAGE [conn110] renameCollection: renaming collection f941104e-0e0c-4223-83eb-9c6b75e5a251 from test5_fsmdb0.tmp.agg_out.8e981150-0785-4c01-99bf-74c150327e01 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.345-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.350-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b with provided UUID: 7ae1c0e8-6a07-4767-872a-447ad75a3c9a and options: { uuid: UUID("7ae1c0e8-6a07-4767-872a-447ad75a3c9a"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.163-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f)'. Ident: 'index-732-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 1015)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.353-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 (df3d59f1-fcdc-49bf-99d1-83876386957e) to test5_fsmdb0.agg_out and drop b5be7a35-d18e-47c4-bde0-c1502d32e3e4.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.363-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.163-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (71ef8fbb-d2bf-4837-9d18-655dd0f90a9f)'. Ident: 'index-735-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 1015)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.353-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (b5be7a35-d18e-47c4-bde0-c1502d32e3e4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 2288), t: 1 } and commit timestamp Timestamp(1574796766, 2288)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.370-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 (df3d59f1-fcdc-49bf-99d1-83876386957e) to test5_fsmdb0.agg_out and drop b5be7a35-d18e-47c4-bde0-c1502d32e3e4.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.163-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-730-8224331490264904478, commit timestamp: Timestamp(1574796766, 1015)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.353-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (b5be7a35-d18e-47c4-bde0-c1502d32e3e4).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.370-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (b5be7a35-d18e-47c4-bde0-c1502d32e3e4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 2288), t: 1 } and commit timestamp Timestamp(1574796766, 2288)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.163-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.353-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection df3d59f1-fcdc-49bf-99d1-83876386957e from test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.370-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (b5be7a35-d18e-47c4-bde0-c1502d32e3e4).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.163-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.353-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (b5be7a35-d18e-47c4-bde0-c1502d32e3e4)'. Ident: 'index-752--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 2288)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.370-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection df3d59f1-fcdc-49bf-99d1-83876386957e from test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.163-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2619287750340708150, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 992224586906430406, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796765772), clusterTime: Timestamp(1574796765, 5564) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796765, 5564), signature: { hash: BinData(0, 1CB732F8AE5191EAB756BBA4BDC9781F2738676E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 390ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.353-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (b5be7a35-d18e-47c4-bde0-c1502d32e3e4)'. Ident: 'index-763--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 2288)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.370-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (b5be7a35-d18e-47c4-bde0-c1502d32e3e4)'. Ident: 'index-752--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 2288)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.164-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.353-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-751--8000595249233899911, commit timestamp: Timestamp(1574796766, 2288)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.370-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (b5be7a35-d18e-47c4-bde0-c1502d32e3e4)'. Ident: 'index-763--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 2288)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.164-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.359-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 (1d6fce01-9df6-4152-8d74-4ed9a632908b) to test5_fsmdb0.agg_out and drop df3d59f1-fcdc-49bf-99d1-83876386957e.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.370-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-751--4104909142373009110, commit timestamp: Timestamp(1574796766, 2288)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.167-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d with generated UUID: 22157587-4a84-46d6-b062-436bedaef575 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.359-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (df3d59f1-fcdc-49bf-99d1-83876386957e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 3534), t: 1 } and commit timestamp Timestamp(1574796766, 3534)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.378-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 (1d6fce01-9df6-4152-8d74-4ed9a632908b) to test5_fsmdb0.agg_out and drop df3d59f1-fcdc-49bf-99d1-83876386957e.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.173-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.359-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (df3d59f1-fcdc-49bf-99d1-83876386957e).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.379-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (df3d59f1-fcdc-49bf-99d1-83876386957e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 3534), t: 1 } and commit timestamp Timestamp(1574796766, 3534)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.176-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.359-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 1d6fce01-9df6-4152-8d74-4ed9a632908b from test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.379-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (df3d59f1-fcdc-49bf-99d1-83876386957e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.191-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.359-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (df3d59f1-fcdc-49bf-99d1-83876386957e)'. Ident: 'index-756--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 3534)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.379-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 1d6fce01-9df6-4152-8d74-4ed9a632908b from test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.191-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.359-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (df3d59f1-fcdc-49bf-99d1-83876386957e)'. Ident: 'index-765--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 3534)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.379-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (df3d59f1-fcdc-49bf-99d1-83876386957e)'. Ident: 'index-756--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 3534)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.191-0500 I STORAGE [conn114] Index build initialized: 252327bd-a92d-4a4d-bccf-de18fe0707a0: test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 (1d6fce01-9df6-4152-8d74-4ed9a632908b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.359-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-755--8000595249233899911, commit timestamp: Timestamp(1574796766, 3534)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.379-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (df3d59f1-fcdc-49bf-99d1-83876386957e)'. Ident: 'index-765--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 3534)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.191-0500 I INDEX [conn114] Waiting for index build to complete: 252327bd-a92d-4a4d-bccf-de18fe0707a0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.360-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d (22157587-4a84-46d6-b062-436bedaef575) to test5_fsmdb0.agg_out and drop 1d6fce01-9df6-4152-8d74-4ed9a632908b.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.379-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-755--4104909142373009110, commit timestamp: Timestamp(1574796766, 3534)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.192-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 945bb3af-a36f-4a94-b31d-51dc79c5e33a: test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 ( b5be7a35-d18e-47c4-bde0-c1502d32e3e4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.360-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (1d6fce01-9df6-4152-8d74-4ed9a632908b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 3535), t: 1 } and commit timestamp Timestamp(1574796766, 3535)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.379-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d (22157587-4a84-46d6-b062-436bedaef575) to test5_fsmdb0.agg_out and drop 1d6fce01-9df6-4152-8d74-4ed9a632908b.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.193-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 37a58cf9-f010-451c-bf37-eea70ddc9197: test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 ( df3d59f1-fcdc-49bf-99d1-83876386957e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.360-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (1d6fce01-9df6-4152-8d74-4ed9a632908b).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.379-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (1d6fce01-9df6-4152-8d74-4ed9a632908b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 3535), t: 1 } and commit timestamp Timestamp(1574796766, 3535)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.202-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.360-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 22157587-4a84-46d6-b062-436bedaef575 from test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.379-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (1d6fce01-9df6-4152-8d74-4ed9a632908b).
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:46.873-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796766, 1594), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 622ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:46.958-0500 I NETWORK [listener] connection accepted from 127.0.0.1:47732 #223 (48 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:46.978-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796766, 3535), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 687ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.203-0500 I INDEX [conn110] Registering index build: 897e9b7a-2abe-4d65-ae44-a882083a36bf
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.360-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1d6fce01-9df6-4152-8d74-4ed9a632908b)'. Ident: 'index-754--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 3535)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.380-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection 22157587-4a84-46d6-b062-436bedaef575 from test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:46.998-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796766, 3536), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 186ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:46.959-0500 I NETWORK [conn223] received client metadata from 127.0.0.1:47732 conn223: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:46.999-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796766, 3533), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 708ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.216-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.360-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1d6fce01-9df6-4152-8d74-4ed9a632908b)'. Ident: 'index-767--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 3535)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.380-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1d6fce01-9df6-4152-8d74-4ed9a632908b)'. Ident: 'index-754--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 3535)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:47.030-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796766, 4045), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 155ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:47.118-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796766, 5585), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 138ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.216-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.360-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-753--8000595249233899911, commit timestamp: Timestamp(1574796766, 3535)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.380-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1d6fce01-9df6-4152-8d74-4ed9a632908b)'. Ident: 'index-767--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 3535)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:47.067-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796766, 3537), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 254ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.217-0500 I STORAGE [conn112] Index build initialized: 4380f39e-d7c2-4d8d-a74b-ac5162269150: test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d (c3763a64-0bfd-4e76-821d-698d988201e3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.361-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d (c3763a64-0bfd-4e76-821d-698d988201e3) to test5_fsmdb0.agg_out and drop 22157587-4a84-46d6-b062-436bedaef575.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.380-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-753--4104909142373009110, commit timestamp: Timestamp(1574796766, 3535)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.217-0500 I INDEX [conn112] Waiting for index build to complete: 4380f39e-d7c2-4d8d-a74b-ac5162269150
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.361-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (22157587-4a84-46d6-b062-436bedaef575) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 3536), t: 1 } and commit timestamp Timestamp(1574796766, 3536)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.380-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d (c3763a64-0bfd-4e76-821d-698d988201e3) to test5_fsmdb0.agg_out and drop 22157587-4a84-46d6-b062-436bedaef575.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.217-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.361-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (22157587-4a84-46d6-b062-436bedaef575).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.380-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (22157587-4a84-46d6-b062-436bedaef575) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 3536), t: 1 } and commit timestamp Timestamp(1574796766, 3536)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.217-0500 I INDEX [conn108] Index build completed: 945bb3af-a36f-4a94-b31d-51dc79c5e33a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.361-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection c3763a64-0bfd-4e76-821d-698d988201e3 from test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.380-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (22157587-4a84-46d6-b062-436bedaef575).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.217-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 510), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 12672 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 112ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.361-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (22157587-4a84-46d6-b062-436bedaef575)'. Ident: 'index-762--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 3536)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.380-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection c3763a64-0bfd-4e76-821d-698d988201e3 from test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.217-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.361-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (22157587-4a84-46d6-b062-436bedaef575)'. Ident: 'index-771--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 3536)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.380-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (22157587-4a84-46d6-b062-436bedaef575)'. Ident: 'index-762--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 3536)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.228-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.361-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-761--8000595249233899911, commit timestamp: Timestamp(1574796766, 3536)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.380-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (22157587-4a84-46d6-b062-436bedaef575)'. Ident: 'index-771--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 3536)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.236-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.839-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb with provided UUID: 9ebef7d4-7d4f-451f-8aca-70ff48b900f2 and options: { uuid: UUID("9ebef7d4-7d4f-451f-8aca-70ff48b900f2"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.380-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-761--4104909142373009110, commit timestamp: Timestamp(1574796766, 3536)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.236-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.851-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.852-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb with provided UUID: 9ebef7d4-7d4f-451f-8aca-70ff48b900f2 and options: { uuid: UUID("9ebef7d4-7d4f-451f-8aca-70ff48b900f2"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.236-0500 I STORAGE [conn110] Index build initialized: 897e9b7a-2abe-4d65-ae44-a882083a36bf: test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d (22157587-4a84-46d6-b062-436bedaef575 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.852-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 with provided UUID: 1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd and options: { uuid: UUID("1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.867-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.236-0500 I INDEX [conn110] Waiting for index build to complete: 897e9b7a-2abe-4d65-ae44-a882083a36bf
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.867-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.868-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 with provided UUID: 1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd and options: { uuid: UUID("1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.236-0500 I INDEX [conn46] Index build completed: 37a58cf9-f010-451c-bf37-eea70ddc9197
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.883-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.884-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.236-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.883-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.901-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.236-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.883-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 17a31a07-3954-47ca-bccc-d01496eaa863: test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b (7ae1c0e8-6a07-4767-872a-447ad75a3c9a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.901-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.236-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 511), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 65 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 119ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.883-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.901-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 8d237ed3-8bd6-4d93-8ef4-5892a3a439e9: test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b (7ae1c0e8-6a07-4767-872a-447ad75a3c9a ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.242-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 252327bd-a92d-4a4d-bccf-de18fe0707a0: test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 ( 1d6fce01-9df6-4152-8d74-4ed9a632908b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.884-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.901-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.242-0500 I INDEX [conn114] Index build completed: 252327bd-a92d-4a4d-bccf-de18fe0707a0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.885-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 with provided UUID: 5eda5599-db9e-42df-a0cb-c9cba6cb5ea7 and options: { uuid: UUID("5eda5599-db9e-42df-a0cb-c9cba6cb5ea7"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.901-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.242-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 510), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 6933 } }, Collection: { acquireCount: { w: 1, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 83 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 131ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.887-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.903-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.242-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.896-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 17a31a07-3954-47ca-bccc-d01496eaa863: test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b ( 7ae1c0e8-6a07-4767-872a-447ad75a3c9a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.905-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 with provided UUID: 5eda5599-db9e-42df-a0cb-c9cba6cb5ea7 and options: { uuid: UUID("5eda5599-db9e-42df-a0cb-c9cba6cb5ea7"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.243-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.904-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.906-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 8d237ed3-8bd6-4d93-8ef4-5892a3a439e9: test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b ( 7ae1c0e8-6a07-4767-872a-447ad75a3c9a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.245-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.905-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 with provided UUID: c5c04a31-df85-4197-9177-405c8855e420 and options: { uuid: UUID("c5c04a31-df85-4197-9177-405c8855e420"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.921-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.248-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.921-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.922-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 with provided UUID: c5c04a31-df85-4197-9177-405c8855e420 and options: { uuid: UUID("c5c04a31-df85-4197-9177-405c8855e420"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.248-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.927-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b (7ae1c0e8-6a07-4767-872a-447ad75a3c9a) to test5_fsmdb0.agg_out and drop c3763a64-0bfd-4e76-821d-698d988201e3.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.937-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.248-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (f941104e-0e0c-4223-83eb-9c6b75e5a251) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 1530), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.927-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (c3763a64-0bfd-4e76-821d-698d988201e3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 4045), t: 1 } and commit timestamp Timestamp(1574796766, 4045)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.953-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b (7ae1c0e8-6a07-4767-872a-447ad75a3c9a) to test5_fsmdb0.agg_out and drop c3763a64-0bfd-4e76-821d-698d988201e3.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.248-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (f941104e-0e0c-4223-83eb-9c6b75e5a251).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.927-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (c3763a64-0bfd-4e76-821d-698d988201e3).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.953-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (c3763a64-0bfd-4e76-821d-698d988201e3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 4045), t: 1 } and commit timestamp Timestamp(1574796766, 4045)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.248-0500 I STORAGE [conn108] renameCollection: renaming collection b5be7a35-d18e-47c4-bde0-c1502d32e3e4 from test5_fsmdb0.tmp.agg_out.d3222b7e-75aa-4c4f-8e4f-f297cb349dc0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.927-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 7ae1c0e8-6a07-4767-872a-447ad75a3c9a from test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.953-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (c3763a64-0bfd-4e76-821d-698d988201e3).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.248-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f941104e-0e0c-4223-83eb-9c6b75e5a251)'. Ident: 'index-738-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 1530)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.927-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c3763a64-0bfd-4e76-821d-698d988201e3)'. Ident: 'index-760--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 4045)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.953-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 7ae1c0e8-6a07-4767-872a-447ad75a3c9a from test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.248-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f941104e-0e0c-4223-83eb-9c6b75e5a251)'. Ident: 'index-739-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 1530)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.927-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c3763a64-0bfd-4e76-821d-698d988201e3)'. Ident: 'index-769--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 4045)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.953-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c3763a64-0bfd-4e76-821d-698d988201e3)'. Ident: 'index-760--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 4045)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.248-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-736-8224331490264904478, commit timestamp: Timestamp(1574796766, 1530)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.927-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-759--8000595249233899911, commit timestamp: Timestamp(1574796766, 4045)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.953-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c3763a64-0bfd-4e76-821d-698d988201e3)'. Ident: 'index-769--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 4045)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.249-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 855949085444231870, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8326951655371857511, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796766071), clusterTime: Timestamp(1574796765, 6069) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 5), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 175ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.928-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 with provided UUID: 30a9dce2-217b-4655-a7de-fceb42e20721 and options: { uuid: UUID("30a9dce2-217b-4655-a7de-fceb42e20721"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.953-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-759--4104909142373009110, commit timestamp: Timestamp(1574796766, 4045)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.249-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 4380f39e-d7c2-4d8d-a74b-ac5162269150: test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d ( c3763a64-0bfd-4e76-821d-698d988201e3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.943-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.954-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 with provided UUID: 30a9dce2-217b-4655-a7de-fceb42e20721 and options: { uuid: UUID("30a9dce2-217b-4655-a7de-fceb42e20721"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.249-0500 I INDEX [conn112] Index build completed: 4380f39e-d7c2-4d8d-a74b-ac5162269150
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.958-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.969-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.249-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 577), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 104ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.958-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.984-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.252-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b with generated UUID: 7ae1c0e8-6a07-4767-872a-447ad75a3c9a and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.958-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 9394c730-169d-4334-acb1-d601e7a9073f: test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb (9ebef7d4-7d4f-451f-8aca-70ff48b900f2 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.984-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.252-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 897e9b7a-2abe-4d65-ae44-a882083a36bf: test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d ( 22157587-4a84-46d6-b062-436bedaef575 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.958-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.984-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 46ac8fbe-6f12-4f83-82e9-c43b7502005b: test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb (9ebef7d4-7d4f-451f-8aca-70ff48b900f2 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.252-0500 I INDEX [conn110] Index build completed: 897e9b7a-2abe-4d65-ae44-a882083a36bf
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.959-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.984-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.270-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.961-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.985-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.271-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.970-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 9394c730-169d-4334-acb1-d601e7a9073f: test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb ( 9ebef7d4-7d4f-451f-8aca-70ff48b900f2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.987-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.271-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (b5be7a35-d18e-47c4-bde0-c1502d32e3e4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 2288), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.978-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:46.990-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 46ac8fbe-6f12-4f83-82e9-c43b7502005b: test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb ( 9ebef7d4-7d4f-451f-8aca-70ff48b900f2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.271-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (b5be7a35-d18e-47c4-bde0-c1502d32e3e4).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.978-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.006-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.271-0500 I STORAGE [conn46] renameCollection: renaming collection df3d59f1-fcdc-49bf-99d1-83876386957e from test5_fsmdb0.tmp.agg_out.28dd3038-a0bb-48c6-a184-01bfb4aea532 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.978-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: e370142a-1cd0-4ce7-a76c-46059d5da773: test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.006-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.271-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (b5be7a35-d18e-47c4-bde0-c1502d32e3e4)'. Ident: 'index-744-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 2288)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.978-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.006-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: fb2529d9-8d8c-4b4d-88a0-219e8fa7bbd3: test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.271-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (b5be7a35-d18e-47c4-bde0-c1502d32e3e4)'. Ident: 'index-747-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 2288)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.979-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.006-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.271-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-741-8224331490264904478, commit timestamp: Timestamp(1574796766, 2288)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.981-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.006-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.271-0500 I INDEX [conn114] Registering index build: c707d719-ac7b-4c50-8a69-2b8a968a45d3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.988-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: e370142a-1cd0-4ce7-a76c-46059d5da773: test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 ( 1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.009-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.271-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1796832914828820522, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6530506257994387871, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796766075), clusterTime: Timestamp(1574796766, 5) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 6), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 195ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.996-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.018-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: fb2529d9-8d8c-4b4d-88a0-219e8fa7bbd3: test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 ( 1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.273-0500 I COMMAND [conn71] CMD: dropIndexes test5_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.996-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.025-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.289-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.996-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 63f5c28e-5976-41e9-9af4-565042c7a2ae: test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.025-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.289-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.996-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.025-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: d09bf2de-9a61-4e25-b4a3-2c8d37e0668d: test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.289-0500 I STORAGE [conn114] Index build initialized: c707d719-ac7b-4c50-8a69-2b8a968a45d3: test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b (7ae1c0e8-6a07-4767-872a-447ad75a3c9a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:46.997-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.026-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.289-0500 I INDEX [conn114] Waiting for index build to complete: c707d719-ac7b-4c50-8a69-2b8a968a45d3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.000-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.026-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.289-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.004-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 63f5c28e-5976-41e9-9af4-565042c7a2ae: test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 ( 5eda5599-db9e-42df-a0cb-c9cba6cb5ea7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.028-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.289-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (df3d59f1-fcdc-49bf-99d1-83876386957e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 3534), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.028-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.030-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d09bf2de-9a61-4e25-b4a3-2c8d37e0668d: test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 ( 5eda5599-db9e-42df-a0cb-c9cba6cb5ea7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.289-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (df3d59f1-fcdc-49bf-99d1-83876386957e).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.028-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.049-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.289-0500 I STORAGE [conn112] renameCollection: renaming collection 1d6fce01-9df6-4152-8d74-4ed9a632908b from test5_fsmdb0.tmp.agg_out.ebd934b6-dd79-4931-82b4-7ca5abc5a647 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.028-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 0f7b991d-3b88-4206-8205-b16fe6cf5977: test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 (30a9dce2-217b-4655-a7de-fceb42e20721 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.049-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.289-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (df3d59f1-fcdc-49bf-99d1-83876386957e)'. Ident: 'index-746-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 3534)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.028-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.049-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: e2133cbb-54bc-4929-976c-6d40817485a7: test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 (30a9dce2-217b-4655-a7de-fceb42e20721 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.289-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (df3d59f1-fcdc-49bf-99d1-83876386957e)'. Ident: 'index-751-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 3534)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.029-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.049-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.289-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-743-8224331490264904478, commit timestamp: Timestamp(1574796766, 3534)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.032-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.049-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.289-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.034-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 0f7b991d-3b88-4206-8205-b16fe6cf5977: test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 ( 30a9dce2-217b-4655-a7de-fceb42e20721 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.051-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.289-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (1d6fce01-9df6-4152-8d74-4ed9a632908b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 3535), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.050-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.055-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e2133cbb-54bc-4929-976c-6d40817485a7: test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 ( 30a9dce2-217b-4655-a7de-fceb42e20721 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.289-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6065834303229393316, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6512556180496504454, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796766074), clusterTime: Timestamp(1574796766, 5) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 6), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 214ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.050-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.072-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.289-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (1d6fce01-9df6-4152-8d74-4ed9a632908b).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.050-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: bc94dcb3-afbe-4511-9e08-f6b77674aa78: test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 (c5c04a31-df85-4197-9177-405c8855e420 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.072-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.290-0500 I STORAGE [conn108] renameCollection: renaming collection 22157587-4a84-46d6-b062-436bedaef575 from test5_fsmdb0.tmp.agg_out.b0aa36f3-fd0a-40f1-8d24-2a9ea961377d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.050-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.072-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: c59b7d3e-a523-4e09-b7ea-c4edc7f2e509: test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 (c5c04a31-df85-4197-9177-405c8855e420 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.290-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1d6fce01-9df6-4152-8d74-4ed9a632908b)'. Ident: 'index-745-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 3535)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.051-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.072-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.290-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1d6fce01-9df6-4152-8d74-4ed9a632908b)'. Ident: 'index-753-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 3535)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.053-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.073-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.290-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-742-8224331490264904478, commit timestamp: Timestamp(1574796766, 3535)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.057-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: bc94dcb3-afbe-4511-9e08-f6b77674aa78: test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 ( c5c04a31-df85-4197-9177-405c8855e420 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.076-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.290-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.061-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd) to test5_fsmdb0.agg_out and drop 7ae1c0e8-6a07-4767-872a-447ad75a3c9a.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.079-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c59b7d3e-a523-4e09-b7ea-c4edc7f2e509: test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 ( c5c04a31-df85-4197-9177-405c8855e420 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.290-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5917198811540977493, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7015020852316308079, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796766165), clusterTime: Timestamp(1574796766, 1015) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 1015), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 124ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.061-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (7ae1c0e8-6a07-4767-872a-447ad75a3c9a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 5329), t: 1 } and commit timestamp Timestamp(1574796766, 5329)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.084-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd) to test5_fsmdb0.agg_out and drop 7ae1c0e8-6a07-4767-872a-447ad75a3c9a.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.290-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (22157587-4a84-46d6-b062-436bedaef575) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 3536), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.062-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (7ae1c0e8-6a07-4767-872a-447ad75a3c9a).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.084-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (7ae1c0e8-6a07-4767-872a-447ad75a3c9a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 5329), t: 1 } and commit timestamp Timestamp(1574796766, 5329)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.290-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (22157587-4a84-46d6-b062-436bedaef575).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.062-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd from test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.084-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (7ae1c0e8-6a07-4767-872a-447ad75a3c9a).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.290-0500 I STORAGE [conn110] renameCollection: renaming collection c3763a64-0bfd-4e76-821d-698d988201e3 from test5_fsmdb0.tmp.agg_out.78cd3514-0696-4355-b355-43fcc05f282d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.062-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7ae1c0e8-6a07-4767-872a-447ad75a3c9a)'. Ident: 'index-774--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 5329)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.084-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd from test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.290-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (22157587-4a84-46d6-b062-436bedaef575)'. Ident: 'index-756-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 3536)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.062-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7ae1c0e8-6a07-4767-872a-447ad75a3c9a)'. Ident: 'index-779--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 5329)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.084-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7ae1c0e8-6a07-4767-872a-447ad75a3c9a)'. Ident: 'index-774--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 5329)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.290-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (22157587-4a84-46d6-b062-436bedaef575)'. Ident: 'index-759-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 3536)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.062-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-773--8000595249233899911, commit timestamp: Timestamp(1574796766, 5329)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.084-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7ae1c0e8-6a07-4767-872a-447ad75a3c9a)'. Ident: 'index-779--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 5329)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.290-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-754-8224331490264904478, commit timestamp: Timestamp(1574796766, 3536)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.066-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 with provided UUID: 523e7d40-0c97-439c-800f-6040151003ad and options: { uuid: UUID("523e7d40-0c97-439c-800f-6040151003ad"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.084-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-773--4104909142373009110, commit timestamp: Timestamp(1574796766, 5329)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.290-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.081-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.089-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 with provided UUID: 523e7d40-0c97-439c-800f-6040151003ad and options: { uuid: UUID("523e7d40-0c97-439c-800f-6040151003ad"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.290-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2587834191144793588, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4355122302148444486, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796766118), clusterTime: Timestamp(1574796766, 511) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 511), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 171ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.084-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7) to test5_fsmdb0.agg_out and drop 1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.105-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.084-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 6203), t: 1 } and commit timestamp Timestamp(1574796766, 6203)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.109-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7) to test5_fsmdb0.agg_out and drop 1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.291-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.084-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.109-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 6203), t: 1 } and commit timestamp Timestamp(1574796766, 6203)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.291-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb with generated UUID: 9ebef7d4-7d4f-451f-8aca-70ff48b900f2 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.084-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 5eda5599-db9e-42df-a0cb-c9cba6cb5ea7 from test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.109-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.292-0500 I COMMAND [conn65] CMD: dropIndexes test5_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.084-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd)'. Ident: 'index-778--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 6203)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.109-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 5eda5599-db9e-42df-a0cb-c9cba6cb5ea7 from test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.810-0500 I COMMAND [conn65] command test5_fsmdb0.agg_out appName: "tid:4" command: dropIndexes { dropIndexes: "agg_out", index: { flag: 1.0 }, writeConcern: { w: 1, wtimeout: 0 }, allowImplicitCollectionCreation: false, shardVersion: [ Timestamp(0, 0), ObjectId('000000000000000000000000') ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 3536), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"can't find index with key: { flag: 1.0 }" errName:IndexNotFound errCode:27 reslen:424 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 518ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.084-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd)'. Ident: 'index-789--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 6203)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.109-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd)'. Ident: 'index-778--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 6203)'
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:49.978-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796766, 6332), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2978ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.810-0500 I COMMAND [conn216] command test5_fsmdb0.agg_out command: listIndexes { listIndexes: "agg_out", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, $clusterTime: { clusterTime: Timestamp(1574796766, 3537), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:495 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 518400 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 518ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.084-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-777--8000595249233899911, commit timestamp: Timestamp(1574796766, 6203)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.109-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd)'. Ident: 'index-789--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 6203)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.811-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 with generated UUID: 1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.085-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb (9ebef7d4-7d4f-451f-8aca-70ff48b900f2) to test5_fsmdb0.agg_out and drop 5eda5599-db9e-42df-a0cb-c9cba6cb5ea7.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.109-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-777--4104909142373009110, commit timestamp: Timestamp(1574796766, 6203)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.812-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796766, 3536), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796766, 3536), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 reslen:753 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 425852 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 430ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.085-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 6204), t: 1 } and commit timestamp Timestamp(1574796766, 6204)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.110-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb (9ebef7d4-7d4f-451f-8aca-70ff48b900f2) to test5_fsmdb0.agg_out and drop 5eda5599-db9e-42df-a0cb-c9cba6cb5ea7.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.812-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.085-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.111-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 6204), t: 1 } and commit timestamp Timestamp(1574796766, 6204)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.813-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 with generated UUID: 5eda5599-db9e-42df-a0cb-c9cba6cb5ea7 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.085-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 9ebef7d4-7d4f-451f-8aca-70ff48b900f2 from test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.111-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.813-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 with generated UUID: c5c04a31-df85-4197-9177-405c8855e420 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.085-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7)'. Ident: 'index-782--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 6204)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.111-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 9ebef7d4-7d4f-451f-8aca-70ff48b900f2 from test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.823-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: c707d719-ac7b-4c50-8a69-2b8a968a45d3: test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b ( 7ae1c0e8-6a07-4767-872a-447ad75a3c9a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.085-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7)'. Ident: 'index-791--8000595249233899911', commit timestamp: 'Timestamp(1574796766, 6204)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.111-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7)'. Ident: 'index-782--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 6204)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.823-0500 I INDEX [conn114] Index build completed: c707d719-ac7b-4c50-8a69-2b8a968a45d3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.085-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-781--8000595249233899911, commit timestamp: Timestamp(1574796766, 6204)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.090-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 with provided UUID: f13e5025-eda7-4d47-878b-356d28a65c54 and options: { uuid: UUID("f13e5025-eda7-4d47-878b-356d28a65c54"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.823-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 2288), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 139 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 552ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.111-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7)'. Ident: 'index-791--4104909142373009110', commit timestamp: 'Timestamp(1574796766, 6204)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.106-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.837-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.111-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-781--4104909142373009110, commit timestamp: Timestamp(1574796766, 6204)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.107-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 (30a9dce2-217b-4655-a7de-fceb42e20721) to test5_fsmdb0.agg_out and drop 9ebef7d4-7d4f-451f-8aca-70ff48b900f2.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.837-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb appName: "tid:3" command: create { create: "tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb", temp: true, validationLevel: "off", validationAction: "error", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 3536), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 545ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.114-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796766, 6204) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796766, 6332), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 6357 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 111ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.108-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (9ebef7d4-7d4f-451f-8aca-70ff48b900f2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796767, 119), t: 1 } and commit timestamp Timestamp(1574796767, 119)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.837-0500 I INDEX [conn110] Registering index build: 08aaec61-9483-4f21-a973-064be13f9276
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.115-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 with provided UUID: f13e5025-eda7-4d47-878b-356d28a65c54 and options: { uuid: UUID("f13e5025-eda7-4d47-878b-356d28a65c54"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.108-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (9ebef7d4-7d4f-451f-8aca-70ff48b900f2).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.841-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.130-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.108-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 30a9dce2-217b-4655-a7de-fceb42e20721 from test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.842-0500 I INDEX [conn108] Registering index build: 7c03fca9-e71a-4b68-89ed-bd352ccc40c4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.132-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 (30a9dce2-217b-4655-a7de-fceb42e20721) to test5_fsmdb0.agg_out and drop 9ebef7d4-7d4f-451f-8aca-70ff48b900f2.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.108-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9ebef7d4-7d4f-451f-8aca-70ff48b900f2)'. Ident: 'index-776--8000595249233899911', commit timestamp: 'Timestamp(1574796767, 119)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.849-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.132-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (9ebef7d4-7d4f-451f-8aca-70ff48b900f2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796767, 119), t: 1 } and commit timestamp Timestamp(1574796767, 119)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.108-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9ebef7d4-7d4f-451f-8aca-70ff48b900f2)'. Ident: 'index-787--8000595249233899911', commit timestamp: 'Timestamp(1574796767, 119)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.849-0500 I INDEX [conn112] Registering index build: 38466c77-ee79-4428-86ff-4dd8072deef2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.132-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (9ebef7d4-7d4f-451f-8aca-70ff48b900f2).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.108-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-775--8000595249233899911, commit timestamp: Timestamp(1574796767, 119)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.855-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.132-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 30a9dce2-217b-4655-a7de-fceb42e20721 from test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.110-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 with provided UUID: f6e35aeb-8619-4d4a-a276-7ede73b9c323 and options: { uuid: UUID("f6e35aeb-8619-4d4a-a276-7ede73b9c323"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.856-0500 I INDEX [conn46] Registering index build: a932f6a3-7aa0-496d-8064-b4e9a638f301
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.132-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9ebef7d4-7d4f-451f-8aca-70ff48b900f2)'. Ident: 'index-776--4104909142373009110', commit timestamp: 'Timestamp(1574796767, 119)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.126-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.872-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.132-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9ebef7d4-7d4f-451f-8aca-70ff48b900f2)'. Ident: 'index-787--4104909142373009110', commit timestamp: 'Timestamp(1574796767, 119)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.143-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.872-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.132-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-775--4104909142373009110, commit timestamp: Timestamp(1574796767, 119)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.143-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.872-0500 I STORAGE [conn110] Index build initialized: 08aaec61-9483-4f21-a973-064be13f9276: test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb (9ebef7d4-7d4f-451f-8aca-70ff48b900f2 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.135-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 with provided UUID: f6e35aeb-8619-4d4a-a276-7ede73b9c323 and options: { uuid: UUID("f6e35aeb-8619-4d4a-a276-7ede73b9c323"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.144-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 27db315c-546e-4c9c-a662-0f8ebff8148d: test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 (523e7d40-0c97-439c-800f-6040151003ad ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.872-0500 I INDEX [conn110] Waiting for index build to complete: 08aaec61-9483-4f21-a973-064be13f9276
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.149-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.144-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.872-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.167-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.144-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.872-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (c3763a64-0bfd-4e76-821d-698d988201e3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 4045), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.167-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.145-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 (c5c04a31-df85-4197-9177-405c8855e420) to test5_fsmdb0.agg_out and drop 30a9dce2-217b-4655-a7de-fceb42e20721.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.872-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (c3763a64-0bfd-4e76-821d-698d988201e3).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.167-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 751a311b-d20e-4308-b603-e078e3fa2bbc: test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 (523e7d40-0c97-439c-800f-6040151003ad ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.147-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.872-0500 I STORAGE [conn114] renameCollection: renaming collection 7ae1c0e8-6a07-4767-872a-447ad75a3c9a from test5_fsmdb0.tmp.agg_out.ddb4bafd-0241-481b-99e1-82601fce144b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.167-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.147-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (30a9dce2-217b-4655-a7de-fceb42e20721) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796767, 240), t: 1 } and commit timestamp Timestamp(1574796767, 240)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.872-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c3763a64-0bfd-4e76-821d-698d988201e3)'. Ident: 'index-750-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 4045)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.167-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.147-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (30a9dce2-217b-4655-a7de-fceb42e20721).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.872-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c3763a64-0bfd-4e76-821d-698d988201e3)'. Ident: 'index-757-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 4045)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.169-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 (c5c04a31-df85-4197-9177-405c8855e420) to test5_fsmdb0.agg_out and drop 30a9dce2-217b-4655-a7de-fceb42e20721.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.147-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection c5c04a31-df85-4197-9177-405c8855e420 from test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.872-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-748-8224331490264904478, commit timestamp: Timestamp(1574796766, 4045)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.169-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.147-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (30a9dce2-217b-4655-a7de-fceb42e20721)'. Ident: 'index-786--8000595249233899911', commit timestamp: 'Timestamp(1574796767, 240)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.872-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.169-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (30a9dce2-217b-4655-a7de-fceb42e20721) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796767, 240), t: 1 } and commit timestamp Timestamp(1574796767, 240)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.147-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (30a9dce2-217b-4655-a7de-fceb42e20721)'. Ident: 'index-793--8000595249233899911', commit timestamp: 'Timestamp(1574796767, 240)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.872-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5403987926524095926, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1006517471639077895, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796766250), clusterTime: Timestamp(1574796766, 1594) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 1594), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 620ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.169-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (30a9dce2-217b-4655-a7de-fceb42e20721).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.147-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-785--8000595249233899911, commit timestamp: Timestamp(1574796767, 240)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.873-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.169-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection c5c04a31-df85-4197-9177-405c8855e420 from test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.148-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d with provided UUID: 483a4dbd-f620-4646-aa29-9e1b4d31a179 and options: { uuid: UUID("483a4dbd-f620-4646-aa29-9e1b4d31a179"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.875-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 with generated UUID: 30a9dce2-217b-4655-a7de-fceb42e20721 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.169-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (30a9dce2-217b-4655-a7de-fceb42e20721)'. Ident: 'index-786--4104909142373009110', commit timestamp: 'Timestamp(1574796767, 240)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.149-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 27db315c-546e-4c9c-a662-0f8ebff8148d: test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 ( 523e7d40-0c97-439c-800f-6040151003ad ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.883-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.169-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (30a9dce2-217b-4655-a7de-fceb42e20721)'. Ident: 'index-793--4104909142373009110', commit timestamp: 'Timestamp(1574796767, 240)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.165-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.895-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.170-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-785--4104909142373009110, commit timestamp: Timestamp(1574796767, 240)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.166-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 with provided UUID: d8751d53-ba86-426b-9c5a-83d105b2eab2 and options: { uuid: UUID("d8751d53-ba86-426b-9c5a-83d105b2eab2"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.895-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.170-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d with provided UUID: 483a4dbd-f620-4646-aa29-9e1b4d31a179 and options: { uuid: UUID("483a4dbd-f620-4646-aa29-9e1b4d31a179"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.182-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.895-0500 I STORAGE [conn108] Index build initialized: 7c03fca9-e71a-4b68-89ed-bd352ccc40c4: test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.172-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 751a311b-d20e-4308-b603-e078e3fa2bbc: test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 ( 523e7d40-0c97-439c-800f-6040151003ad ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.197-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.895-0500 I INDEX [conn108] Waiting for index build to complete: 7c03fca9-e71a-4b68-89ed-bd352ccc40c4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.187-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.197-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.896-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 08aaec61-9483-4f21-a973-064be13f9276: test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb ( 9ebef7d4-7d4f-451f-8aca-70ff48b900f2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.188-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 with provided UUID: d8751d53-ba86-426b-9c5a-83d105b2eab2 and options: { uuid: UUID("d8751d53-ba86-426b-9c5a-83d105b2eab2"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.197-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 86152f74-b2b6-4fb8-9145-a34a380e207c: test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 (f13e5025-eda7-4d47-878b-356d28a65c54 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.905-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.201-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.198-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.906-0500 I INDEX [conn114] Registering index build: a4cacb84-d7bd-445a-ba24-46b70aaac316
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.215-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.198-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.920-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.215-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.201-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.920-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.215-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 0d0e3762-e2f2-4cbd-a7bc-166da128ecaa: test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 (f13e5025-eda7-4d47-878b-356d28a65c54 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.204-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 (523e7d40-0c97-439c-800f-6040151003ad) to test5_fsmdb0.agg_out and drop c5c04a31-df85-4197-9177-405c8855e420.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.920-0500 I STORAGE [conn112] Index build initialized: 38466c77-ee79-4428-86ff-4dd8072deef2: test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.216-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.204-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (c5c04a31-df85-4197-9177-405c8855e420) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796767, 746), t: 1 } and commit timestamp Timestamp(1574796767, 746)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.920-0500 I INDEX [conn112] Waiting for index build to complete: 38466c77-ee79-4428-86ff-4dd8072deef2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.216-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.204-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (c5c04a31-df85-4197-9177-405c8855e420).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.920-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.218-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.204-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 523e7d40-0c97-439c-800f-6040151003ad from test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.921-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.221-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 0d0e3762-e2f2-4cbd-a7bc-166da128ecaa: test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 ( f13e5025-eda7-4d47-878b-356d28a65c54 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.204-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c5c04a31-df85-4197-9177-405c8855e420)'. Ident: 'index-784--8000595249233899911', commit timestamp: 'Timestamp(1574796767, 746)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.923-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.221-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 (523e7d40-0c97-439c-800f-6040151003ad) to test5_fsmdb0.agg_out and drop c5c04a31-df85-4197-9177-405c8855e420.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.204-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c5c04a31-df85-4197-9177-405c8855e420)'. Ident: 'index-795--8000595249233899911', commit timestamp: 'Timestamp(1574796767, 746)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.931-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 7c03fca9-e71a-4b68-89ed-bd352ccc40c4: test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 ( 1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.221-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (c5c04a31-df85-4197-9177-405c8855e420) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796767, 746), t: 1 } and commit timestamp Timestamp(1574796767, 746)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.204-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-783--8000595249233899911, commit timestamp: Timestamp(1574796767, 746)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.940-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.221-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (c5c04a31-df85-4197-9177-405c8855e420).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.205-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 86152f74-b2b6-4fb8-9145-a34a380e207c: test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 ( f13e5025-eda7-4d47-878b-356d28a65c54 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.940-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.221-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 523e7d40-0c97-439c-800f-6040151003ad from test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.205-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 with provided UUID: 66f94bbb-cecf-4764-9e47-4125dc0d2339 and options: { uuid: UUID("66f94bbb-cecf-4764-9e47-4125dc0d2339"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.940-0500 I STORAGE [conn46] Index build initialized: a932f6a3-7aa0-496d-8064-b4e9a638f301: test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 (c5c04a31-df85-4197-9177-405c8855e420 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.221-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c5c04a31-df85-4197-9177-405c8855e420)'. Ident: 'index-784--4104909142373009110', commit timestamp: 'Timestamp(1574796767, 746)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.229-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.940-0500 I INDEX [conn46] Waiting for index build to complete: a932f6a3-7aa0-496d-8064-b4e9a638f301
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.221-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c5c04a31-df85-4197-9177-405c8855e420)'. Ident: 'index-795--4104909142373009110', commit timestamp: 'Timestamp(1574796767, 746)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.246-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.940-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.221-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-783--4104909142373009110, commit timestamp: Timestamp(1574796767, 746)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.246-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.941-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.230-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 with provided UUID: 66f94bbb-cecf-4764-9e47-4125dc0d2339 and options: { uuid: UUID("66f94bbb-cecf-4764-9e47-4125dc0d2339"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.246-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: f8e5a8d5-2620-47b8-b794-ff49e2a7dbfa: test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 (f6e35aeb-8619-4d4a-a276-7ede73b9c323 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.949-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.244-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.246-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.956-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.261-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.247-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.956-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.261-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.249-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.956-0500 I STORAGE [conn114] Index build initialized: a4cacb84-d7bd-445a-ba24-46b70aaac316: test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 (30a9dce2-217b-4655-a7de-fceb42e20721 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.261-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 7b074f1b-37d8-4e48-9011-4565b91d7c47: test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 (f6e35aeb-8619-4d4a-a276-7ede73b9c323 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.253-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: f8e5a8d5-2620-47b8-b794-ff49e2a7dbfa: test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 ( f6e35aeb-8619-4d4a-a276-7ede73b9c323 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.956-0500 I INDEX [conn108] Index build completed: 7c03fca9-e71a-4b68-89ed-bd352ccc40c4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.262-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.268-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.956-0500 I INDEX [conn114] Waiting for index build to complete: a4cacb84-d7bd-445a-ba24-46b70aaac316
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.262-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.268-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.956-0500 I INDEX [conn110] Index build completed: 08aaec61-9483-4f21-a973-064be13f9276
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.264-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.268-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: f1baff59-ef18-4051-99fe-05745e3bb8c9: test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d (483a4dbd-f620-4646-aa29-9e1b4d31a179 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.956-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.267-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796767, 1134) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796767, 1134), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2531 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 118ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.268-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.956-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 3607), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 431 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 114ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.267-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 7b074f1b-37d8-4e48-9011-4565b91d7c47: test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 ( f6e35aeb-8619-4d4a-a276-7ede73b9c323 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.269-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.956-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.281-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.271-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.957-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 3607), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 119ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.281-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:47.273-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: f1baff59-ef18-4051-99fe-05745e3bb8c9: test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d ( 483a4dbd-f620-4646-aa29-9e1b4d31a179 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.957-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 38466c77-ee79-4428-86ff-4dd8072deef2: test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 ( 5eda5599-db9e-42df-a0cb-c9cba6cb5ea7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.281-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 2b623833-3fba-4b33-baf6-e4bddd544e77: test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d (483a4dbd-f620-4646-aa29-9e1b4d31a179 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:49.980-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 (f13e5025-eda7-4d47-878b-356d28a65c54) to test5_fsmdb0.agg_out and drop 523e7d40-0c97-439c-800f-6040151003ad.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.957-0500 I INDEX [conn112] Index build completed: 38466c77-ee79-4428-86ff-4dd8072deef2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.281-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:49.980-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (523e7d40-0c97-439c-800f-6040151003ad) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796767, 1254), t: 1 } and commit timestamp Timestamp(1574796767, 1254)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.957-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 3607), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 108ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.282-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:49.980-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (523e7d40-0c97-439c-800f-6040151003ad).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.958-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20004
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.284-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:49.980-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection f13e5025-eda7-4d47-878b-356d28a65c54 from test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.958-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:47.285-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 2b623833-3fba-4b33-baf6-e4bddd544e77: test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d ( 483a4dbd-f620-4646-aa29-9e1b4d31a179 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:49.980-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (523e7d40-0c97-439c-800f-6040151003ad)'. Ident: 'index-798--8000595249233899911', commit timestamp: 'Timestamp(1574796767, 1254)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.959-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:49.982-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 (f13e5025-eda7-4d47-878b-356d28a65c54) to test5_fsmdb0.agg_out and drop 523e7d40-0c97-439c-800f-6040151003ad.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:49.980-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (523e7d40-0c97-439c-800f-6040151003ad)'. Ident: 'index-803--8000595249233899911', commit timestamp: 'Timestamp(1574796767, 1254)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.961-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:49.982-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (523e7d40-0c97-439c-800f-6040151003ad) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796767, 1254), t: 1 } and commit timestamp Timestamp(1574796767, 1254)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:49.980-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-797--8000595249233899911, commit timestamp: Timestamp(1574796767, 1254)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.964-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:49.982-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (523e7d40-0c97-439c-800f-6040151003ad).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.967-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: a4cacb84-d7bd-445a-ba24-46b70aaac316: test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 ( 30a9dce2-217b-4655-a7de-fceb42e20721 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:49.982-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection f13e5025-eda7-4d47-878b-356d28a65c54 from test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.967-0500 I INDEX [conn114] Index build completed: a4cacb84-d7bd-445a-ba24-46b70aaac316
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:49.982-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (523e7d40-0c97-439c-800f-6040151003ad)'. Ident: 'index-798--4104909142373009110', commit timestamp: 'Timestamp(1574796767, 1254)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.972-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: a932f6a3-7aa0-496d-8064-b4e9a638f301: test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 ( c5c04a31-df85-4197-9177-405c8855e420 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:49.982-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (523e7d40-0c97-439c-800f-6040151003ad)'. Ident: 'index-803--4104909142373009110', commit timestamp: 'Timestamp(1574796767, 1254)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.972-0500 I INDEX [conn46] Index build completed: a932f6a3-7aa0-496d-8064-b4e9a638f301
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:49.982-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-797--4104909142373009110, commit timestamp: Timestamp(1574796767, 1254)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.972-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.870aee04-544b-428d-9338-a16827452c45", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 3927), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 115ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.977-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.977-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (7ae1c0e8-6a07-4767-872a-447ad75a3c9a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 5329), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.977-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (7ae1c0e8-6a07-4767-872a-447ad75a3c9a).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.977-0500 I STORAGE [conn112] renameCollection: renaming collection 1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd from test5_fsmdb0.tmp.agg_out.6a90a180-ea40-4e3a-9be0-74af7ae17573 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.977-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7ae1c0e8-6a07-4767-872a-447ad75a3c9a)'. Ident: 'index-762-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 5329)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.977-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7ae1c0e8-6a07-4767-872a-447ad75a3c9a)'. Ident: 'index-763-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 5329)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.977-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-761-8224331490264904478, commit timestamp: Timestamp(1574796766, 5329)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.978-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3279395354571786045, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2578340908521281191, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796766291), clusterTime: Timestamp(1574796766, 3536) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 3536), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 686ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.981-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 with generated UUID: 523e7d40-0c97-439c-800f-6040151003ad and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.997-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.997-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.998-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 6203), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.998-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.998-0500 I STORAGE [conn108] renameCollection: renaming collection 5eda5599-db9e-42df-a0cb-c9cba6cb5ea7 from test5_fsmdb0.tmp.agg_out.73d0ddaa-c490-4bed-8998-9c45872989a1 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.998-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd)'. Ident: 'index-770-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 6203)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.998-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1d0f9f71-0ae2-4c48-8a3d-814ddddd69dd)'. Ident: 'index-775-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 6203)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.998-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-766-8224331490264904478, commit timestamp: Timestamp(1574796766, 6203)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.998-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.998-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1309542366876383247, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5545567144424132249, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796766811), clusterTime: Timestamp(1574796766, 3537) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 3538), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 185ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.998-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796766, 6204), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.998-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.998-0500 I STORAGE [conn110] renameCollection: renaming collection 9ebef7d4-7d4f-451f-8aca-70ff48b900f2 from test5_fsmdb0.tmp.agg_out.880821ab-00da-457a-a5ba-b6956aa2f2bb to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.998-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7)'. Ident: 'index-771-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 6204)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.998-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (5eda5599-db9e-42df-a0cb-c9cba6cb5ea7)'. Ident: 'index-779-8224331490264904478', commit timestamp: 'Timestamp(1574796766, 6204)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.998-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-767-8224331490264904478, commit timestamp: Timestamp(1574796766, 6204)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.998-0500 I INDEX [conn112] Registering index build: 4f053bf3-0525-4f0d-998b-b2d91cab3572
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:46.998-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1003460177614264289, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8560814196474684058, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796766290), clusterTime: Timestamp(1574796766, 3535) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 3536), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 707ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.001-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 with generated UUID: f13e5025-eda7-4d47-878b-356d28a65c54 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.021-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.021-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.022-0500 I STORAGE [conn112] Index build initialized: 4f053bf3-0525-4f0d-998b-b2d91cab3572: test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 (523e7d40-0c97-439c-800f-6040151003ad ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.022-0500 I INDEX [conn112] Waiting for index build to complete: 4f053bf3-0525-4f0d-998b-b2d91cab3572
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.029-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.029-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.029-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (9ebef7d4-7d4f-451f-8aca-70ff48b900f2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796767, 119), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.029-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (9ebef7d4-7d4f-451f-8aca-70ff48b900f2).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.029-0500 I STORAGE [conn110] renameCollection: renaming collection 30a9dce2-217b-4655-a7de-fceb42e20721 from test5_fsmdb0.tmp.agg_out.0f054fac-8b7c-417a-b739-4e336ceeac29 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.029-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9ebef7d4-7d4f-451f-8aca-70ff48b900f2)'. Ident: 'index-769-8224331490264904478', commit timestamp: 'Timestamp(1574796767, 119)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.029-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9ebef7d4-7d4f-451f-8aca-70ff48b900f2)'. Ident: 'index-773-8224331490264904478', commit timestamp: 'Timestamp(1574796767, 119)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.029-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-765-8224331490264904478, commit timestamp: Timestamp(1574796767, 119)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.029-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.029-0500 I INDEX [conn114] Registering index build: e818ea2b-0b37-4970-b674-17d6f3f519bb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.030-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7641851353352587757, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4251586742264867614, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796766874), clusterTime: Timestamp(1574796766, 4045) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 4045), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 154ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.030-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.030-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 with generated UUID: f6e35aeb-8619-4d4a-a276-7ede73b9c323 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.040-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.056-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.056-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.056-0500 I STORAGE [conn114] Index build initialized: e818ea2b-0b37-4970-b674-17d6f3f519bb: test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 (f13e5025-eda7-4d47-878b-356d28a65c54 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.056-0500 I INDEX [conn114] Waiting for index build to complete: e818ea2b-0b37-4970-b674-17d6f3f519bb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.057-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 4f053bf3-0525-4f0d-998b-b2d91cab3572: test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 ( 523e7d40-0c97-439c-800f-6040151003ad ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.057-0500 I INDEX [conn112] Index build completed: 4f053bf3-0525-4f0d-998b-b2d91cab3572
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.065-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.066-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.066-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (30a9dce2-217b-4655-a7de-fceb42e20721) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796767, 240), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.066-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (30a9dce2-217b-4655-a7de-fceb42e20721).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.066-0500 I STORAGE [conn46] renameCollection: renaming collection c5c04a31-df85-4197-9177-405c8855e420 from test5_fsmdb0.tmp.agg_out.870aee04-544b-428d-9338-a16827452c45 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.066-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (30a9dce2-217b-4655-a7de-fceb42e20721)'. Ident: 'index-778-8224331490264904478', commit timestamp: 'Timestamp(1574796767, 240)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.066-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (30a9dce2-217b-4655-a7de-fceb42e20721)'. Ident: 'index-783-8224331490264904478', commit timestamp: 'Timestamp(1574796767, 240)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.066-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-776-8224331490264904478, commit timestamp: Timestamp(1574796767, 240)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.066-0500 I INDEX [conn110] Registering index build: 1848d775-f328-4367-bd90-f41ed2e244b5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.066-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d with generated UUID: 483a4dbd-f620-4646-aa29-9e1b4d31a179 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.066-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.066-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7389132354434926172, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3852130328620077576, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796766812), clusterTime: Timestamp(1574796766, 3538) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 3538), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 253ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.067-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.069-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 with generated UUID: d8751d53-ba86-426b-9c5a-83d105b2eab2 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.070-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.093-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: e818ea2b-0b37-4970-b674-17d6f3f519bb: test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 ( f13e5025-eda7-4d47-878b-356d28a65c54 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.101-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.101-0500 I INDEX [conn108] Registering index build: fa3b7d35-179b-4a41-b583-a1bfb6554507
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.110-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.110-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.110-0500 I STORAGE [conn110] Index build initialized: 1848d775-f328-4367-bd90-f41ed2e244b5: test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 (f6e35aeb-8619-4d4a-a276-7ede73b9c323 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.110-0500 I INDEX [conn110] Waiting for index build to complete: 1848d775-f328-4367-bd90-f41ed2e244b5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.110-0500 I INDEX [conn114] Index build completed: e818ea2b-0b37-4970-b674-17d6f3f519bb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.117-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.117-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.117-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (c5c04a31-df85-4197-9177-405c8855e420) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796767, 746), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.117-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (c5c04a31-df85-4197-9177-405c8855e420).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.117-0500 I STORAGE [conn112] renameCollection: renaming collection 523e7d40-0c97-439c-800f-6040151003ad from test5_fsmdb0.tmp.agg_out.4576996a-fb49-4b82-8726-b7d80bd0f8f2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.117-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c5c04a31-df85-4197-9177-405c8855e420)'. Ident: 'index-772-8224331490264904478', commit timestamp: 'Timestamp(1574796767, 746)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.117-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c5c04a31-df85-4197-9177-405c8855e420)'. Ident: 'index-781-8224331490264904478', commit timestamp: 'Timestamp(1574796767, 746)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.117-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-768-8224331490264904478, commit timestamp: Timestamp(1574796767, 746)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.117-0500 I INDEX [conn46] Registering index build: 72f7dd39-e0d4-4239-b8d3-6589562fed47
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.117-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.117-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8630276360338963883, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6429172654327780347, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796766979), clusterTime: Timestamp(1574796766, 5585) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796766, 5585), signature: { hash: BinData(0, 57BF6B1A868423C8EA11438BEC7D5A18524312DF), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 137ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.118-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.120-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 with generated UUID: 66f94bbb-cecf-4764-9e47-4125dc0d2339 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.129-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.145-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.145-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.145-0500 I STORAGE [conn108] Index build initialized: fa3b7d35-179b-4a41-b583-a1bfb6554507: test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d (483a4dbd-f620-4646-aa29-9e1b4d31a179 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.145-0500 I INDEX [conn108] Waiting for index build to complete: fa3b7d35-179b-4a41-b583-a1bfb6554507
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.145-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.148-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 1848d775-f328-4367-bd90-f41ed2e244b5: test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 ( f6e35aeb-8619-4d4a-a276-7ede73b9c323 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.156-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.156-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.166-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.173-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.173-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.173-0500 I STORAGE [conn46] Index build initialized: 72f7dd39-e0d4-4239-b8d3-6589562fed47: test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 (d8751d53-ba86-426b-9c5a-83d105b2eab2 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.173-0500 I INDEX [conn46] Waiting for index build to complete: 72f7dd39-e0d4-4239-b8d3-6589562fed47
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.173-0500 I INDEX [conn110] Index build completed: 1848d775-f328-4367-bd90-f41ed2e244b5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.173-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.173-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796767, 240), signature: { hash: BinData(0, 325022EB1807F7410FDD844C6DDE65A97AA5B6D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 108 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 106ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.173-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (523e7d40-0c97-439c-800f-6040151003ad) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796767, 1254), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:47.174-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: fa3b7d35-179b-4a41-b583-a1bfb6554507: test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d ( 483a4dbd-f620-4646-aa29-9e1b4d31a179 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.977-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (523e7d40-0c97-439c-800f-6040151003ad).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.977-0500 I INDEX [conn108] Index build completed: fa3b7d35-179b-4a41-b583-a1bfb6554507
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.977-0500 I STORAGE [conn114] renameCollection: renaming collection f13e5025-eda7-4d47-878b-356d28a65c54 from test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.978-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796767, 692), signature: { hash: BinData(0, 325022EB1807F7410FDD844C6DDE65A97AA5B6D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 7273 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2876ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.978-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (523e7d40-0c97-439c-800f-6040151003ad)'. Ident: 'index-786-8224331490264904478', commit timestamp: 'Timestamp(1574796767, 1254)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.978-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (523e7d40-0c97-439c-800f-6040151003ad)'. Ident: 'index-787-8224331490264904478', commit timestamp: 'Timestamp(1574796767, 1254)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.978-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-785-8224331490264904478, commit timestamp: Timestamp(1574796767, 1254)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.978-0500 I INDEX [conn112] Registering index build: 29aa517d-38d8-43ca-bc73-b9165ab64a28
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.978-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.978-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450 appName: "tid:3" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.f51cecc0-2541-4ddb-8df0-9dedb0bfa450", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "off", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796767, 1250), signature: { hash: BinData(0, 325022EB1807F7410FDD844C6DDE65A97AA5B6D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 25212 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2830ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.978-0500 W STORAGE [IndexBuildsCoordinatorMongod-4] failed to create WiredTiger bulk cursor: Device or resource busy
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.978-0500 W STORAGE [IndexBuildsCoordinatorMongod-4] falling back to non-bulk cursor for index table:index-805-8224331490264904478
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.978-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.978-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796767, 1134), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796767, 1134), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796767, 1134). Collection minimum timestamp is Timestamp(1574796767, 1252)" errName:SnapshotUnavailable errCode:246 reslen:602 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2709659 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2709ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.978-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.978-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2059845406662048409, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8808720606489621799, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796767000), clusterTime: Timestamp(1574796766, 6332) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796767, 116), signature: { hash: BinData(0, 325022EB1807F7410FDD844C6DDE65A97AA5B6D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2977ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:49.982-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 with generated UUID: 3e68fcf9-7ffb-47a3-ab4e-e455b2334361 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.002-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.002-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.002-0500 I STORAGE [conn112] Index build initialized: 29aa517d-38d8-43ca-bc73-b9165ab64a28: test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 (66f94bbb-cecf-4764-9e47-4125dc0d2339 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.002-0500 I INDEX [conn112] Waiting for index build to complete: 29aa517d-38d8-43ca-bc73-b9165ab64a28
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.002-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.010-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.010-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.013-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.263-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (f13e5025-eda7-4d47-878b-356d28a65c54) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 301), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (f13e5025-eda7-4d47-878b-356d28a65c54).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I STORAGE [conn110] renameCollection: renaming collection f6e35aeb-8619-4d4a-a276-7ede73b9c323 from test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f13e5025-eda7-4d47-878b-356d28a65c54)'. Ident: 'index-790-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 301)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f13e5025-eda7-4d47-878b-356d28a65c54)'. Ident: 'index-791-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 301)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-788-8224331490264904478, commit timestamp: Timestamp(1574796770, 301)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 appName: "tid:0" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "off", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 297), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 260073 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 260ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (f6e35aeb-8619-4d4a-a276-7ede73b9c323) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 302), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (f6e35aeb-8619-4d4a-a276-7ede73b9c323).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I STORAGE [conn114] renameCollection: renaming collection 483a4dbd-f620-4646-aa29-9e1b4d31a179 from test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f6e35aeb-8619-4d4a-a276-7ede73b9c323)'. Ident: 'index-794-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 302)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f6e35aeb-8619-4d4a-a276-7ede73b9c323)'. Ident: 'index-795-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 302)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-792-8224331490264904478, commit timestamp: Timestamp(1574796770, 302)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d appName: "tid:2" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "off", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 297), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 260245 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 260ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 162604039285040950, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6009111655107937472, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796767001), clusterTime: Timestamp(1574796766, 6204) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796767, 117), signature: { hash: BinData(0, 325022EB1807F7410FDD844C6DDE65A97AA5B6D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 27346 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3262ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.264-0500 I INDEX [conn108] Registering index build: 2504aba2-5715-42ec-865a-f41eae75d2f4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.265-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3863701461676681654, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6019275846053333207, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796767031), clusterTime: Timestamp(1574796767, 183) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796767, 236), signature: { hash: BinData(0, 325022EB1807F7410FDD844C6DDE65A97AA5B6D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3232ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.265-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 72f7dd39-e0d4-4239-b8d3-6589562fed47: test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 ( d8751d53-ba86-426b-9c5a-83d105b2eab2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:50.265-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796766, 6204), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3264ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:50.265-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796767, 183), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3233ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.266-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 29aa517d-38d8-43ca-bc73-b9165ab64a28: test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 ( 66f94bbb-cecf-4764-9e47-4125dc0d2339 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.268-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 with generated UUID: d3e58507-f835-4ed2-88cb-d4b5fb6e675d and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.268-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 with generated UUID: d069e3d2-f32b-476d-bdd9-6bf404b89570 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.281-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.281-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.281-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: d4a7761e-4645-417f-b9ea-dcf6040bd345: test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 (d8751d53-ba86-426b-9c5a-83d105b2eab2 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.281-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.281-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.282-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 with provided UUID: 3e68fcf9-7ffb-47a3-ab4e-e455b2334361 and options: { uuid: UUID("3e68fcf9-7ffb-47a3-ab4e-e455b2334361"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.283-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.293-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.293-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.293-0500 I STORAGE [conn108] Index build initialized: 2504aba2-5715-42ec-865a-f41eae75d2f4: test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 (3e68fcf9-7ffb-47a3-ab4e-e455b2334361 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.293-0500 I INDEX [conn108] Waiting for index build to complete: 2504aba2-5715-42ec-865a-f41eae75d2f4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.293-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.293-0500 I INDEX [conn46] Index build completed: 72f7dd39-e0d4-4239-b8d3-6589562fed47
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.293-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d4a7761e-4645-417f-b9ea-dcf6040bd345: test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 ( d8751d53-ba86-426b-9c5a-83d105b2eab2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.293-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796767, 746), signature: { hash: BinData(0, 325022EB1807F7410FDD844C6DDE65A97AA5B6D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 87 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 3175ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.293-0500 I INDEX [conn112] Index build completed: 29aa517d-38d8-43ca-bc73-b9165ab64a28
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.293-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796767, 1250), signature: { hash: BinData(0, 325022EB1807F7410FDD844C6DDE65A97AA5B6D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 2821365 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 3136ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.296-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.296-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.296-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 65f827ad-6d47-46ed-aac3-6ddd32ad12db: test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 (d8751d53-ba86-426b-9c5a-83d105b2eab2 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.300-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.300-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:50.326-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796767, 241), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3257ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:50.360-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796767, 746), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3241ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.297-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.331-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.300-0500 I INDEX [conn114] Registering index build: 64ee00da-8d51-45db-8661-7deaf2eb6066
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:50.394-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796769, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 413ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:50.431-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796770, 302), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 164ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.331-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.331-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 5bc6c138-028f-411a-98ae-45797d7d68dc: test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 (66f94bbb-cecf-4764-9e47-4125dc0d2339 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.331-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:50.550-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796770, 1570), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 187ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:50.469-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796770, 302), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 203ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.297-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.332-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:50.613-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796770, 1947), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:812 protocol:op_msg 217ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.308-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:50.510-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796770, 937), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 181ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.299-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.334-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 (f6e35aeb-8619-4d4a-a276-7ede73b9c323) to test5_fsmdb0.agg_out and drop f13e5025-eda7-4d47-878b-356d28a65c54.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.308-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:50.614-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796770, 2452), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:812 protocol:op_msg 181ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:53.189-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796770, 4035), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2618ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.301-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 65f827ad-6d47-46ed-aac3-6ddd32ad12db: test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 ( d8751d53-ba86-426b-9c5a-83d105b2eab2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.334-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.317-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:50.662-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796770, 2957), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:812 protocol:op_msg 191ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.301-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 with provided UUID: 3e68fcf9-7ffb-47a3-ab4e-e455b2334361 and options: { uuid: UUID("3e68fcf9-7ffb-47a3-ab4e-e455b2334361"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.334-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (f13e5025-eda7-4d47-878b-356d28a65c54) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 301), t: 1 } and commit timestamp Timestamp(1574796770, 301)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.324-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.314-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:53.189-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796770, 3462), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:812 protocol:op_msg 2678ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.334-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (f13e5025-eda7-4d47-878b-356d28a65c54).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.324-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.381-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796770, 245) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796770, 245), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 373ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.334-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection f6e35aeb-8619-4d4a-a276-7ede73b9c323 from test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.325-0500 I STORAGE [conn114] Index build initialized: 64ee00da-8d51-45db-8661-7deaf2eb6066: test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 (d3e58507-f835-4ed2-88cb-d4b5fb6e675d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.397-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.334-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f13e5025-eda7-4d47-878b-356d28a65c54)'. Ident: 'index-800--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 301)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.334-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f13e5025-eda7-4d47-878b-356d28a65c54)'. Ident: 'index-809--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 301)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.397-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.334-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-799--8000595249233899911, commit timestamp: Timestamp(1574796770, 301)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.398-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: bd466199-4068-4a6a-bacb-82c6bd85696d: test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 (66f94bbb-cecf-4764-9e47-4125dc0d2339 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.335-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d (483a4dbd-f620-4646-aa29-9e1b4d31a179) to test5_fsmdb0.agg_out and drop f6e35aeb-8619-4d4a-a276-7ede73b9c323.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.398-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.335-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (f6e35aeb-8619-4d4a-a276-7ede73b9c323) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 302), t: 1 } and commit timestamp Timestamp(1574796770, 302)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.398-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.325-0500 I INDEX [conn114] Waiting for index build to complete: 64ee00da-8d51-45db-8661-7deaf2eb6066
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.335-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (f6e35aeb-8619-4d4a-a276-7ede73b9c323).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.400-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 (f6e35aeb-8619-4d4a-a276-7ede73b9c323) to test5_fsmdb0.agg_out and drop f13e5025-eda7-4d47-878b-356d28a65c54.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.325-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.335-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 483a4dbd-f620-4646-aa29-9e1b4d31a179 from test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.401-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.325-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (483a4dbd-f620-4646-aa29-9e1b4d31a179) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 873), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.335-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f6e35aeb-8619-4d4a-a276-7ede73b9c323)'. Ident: 'index-802--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 302)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.402-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (f13e5025-eda7-4d47-878b-356d28a65c54) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 301), t: 1 } and commit timestamp Timestamp(1574796770, 301)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.325-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (483a4dbd-f620-4646-aa29-9e1b4d31a179).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.335-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f6e35aeb-8619-4d4a-a276-7ede73b9c323)'. Ident: 'index-813--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 302)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.402-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (f13e5025-eda7-4d47-878b-356d28a65c54).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.325-0500 I STORAGE [conn112] renameCollection: renaming collection d8751d53-ba86-426b-9c5a-83d105b2eab2 from test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.335-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-801--8000595249233899911, commit timestamp: Timestamp(1574796770, 302)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.402-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection f6e35aeb-8619-4d4a-a276-7ede73b9c323 from test5_fsmdb0.tmp.agg_out.05650f20-794b-4694-9ccb-b547f69cedf8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.325-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (483a4dbd-f620-4646-aa29-9e1b4d31a179)'. Ident: 'index-798-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 873)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.336-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 5bc6c138-028f-411a-98ae-45797d7d68dc: test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 ( 66f94bbb-cecf-4764-9e47-4125dc0d2339 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.402-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f13e5025-eda7-4d47-878b-356d28a65c54)'. Ident: 'index-800--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 301)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.325-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (483a4dbd-f620-4646-aa29-9e1b4d31a179)'. Ident: 'index-801-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 873)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.336-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 with provided UUID: d3e58507-f835-4ed2-88cb-d4b5fb6e675d and options: { uuid: UUID("d3e58507-f835-4ed2-88cb-d4b5fb6e675d"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.402-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f13e5025-eda7-4d47-878b-356d28a65c54)'. Ident: 'index-809--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 301)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.325-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-796-8224331490264904478, commit timestamp: Timestamp(1574796770, 873)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.350-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.402-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-799--4104909142373009110, commit timestamp: Timestamp(1574796770, 301)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.325-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 2504aba2-5715-42ec-865a-f41eae75d2f4: test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 ( 3e68fcf9-7ffb-47a3-ab4e-e455b2334361 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.351-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 with provided UUID: d069e3d2-f32b-476d-bdd9-6bf404b89570 and options: { uuid: UUID("d069e3d2-f32b-476d-bdd9-6bf404b89570"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.402-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d (483a4dbd-f620-4646-aa29-9e1b4d31a179) to test5_fsmdb0.agg_out and drop f6e35aeb-8619-4d4a-a276-7ede73b9c323.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.325-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.364-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.402-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (f6e35aeb-8619-4d4a-a276-7ede73b9c323) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 302), t: 1 } and commit timestamp Timestamp(1574796770, 302)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.325-0500 I INDEX [conn110] Registering index build: 899b81b0-d374-4ee7-be8d-2de2f38e41a8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.387-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.402-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (f6e35aeb-8619-4d4a-a276-7ede73b9c323).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.325-0500 I INDEX [conn108] Index build completed: 2504aba2-5715-42ec-865a-f41eae75d2f4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.387-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.402-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 483a4dbd-f620-4646-aa29-9e1b4d31a179 from test5_fsmdb0.tmp.agg_out.caeaff1d-77b9-43d7-9a98-008a6036514d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.325-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 297), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 253606 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 314ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.387-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 3bb86a6e-5f03-4cdb-bb6e-5cc7da4fe1a5: test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 (3e68fcf9-7ffb-47a3-ab4e-e455b2334361 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.403-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f6e35aeb-8619-4d4a-a276-7ede73b9c323)'. Ident: 'index-802--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 302)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.325-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4753107806201087761, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7013345726967757457, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796767068), clusterTime: Timestamp(1574796767, 241) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796767, 241), signature: { hash: BinData(0, 325022EB1807F7410FDD844C6DDE65A97AA5B6D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3256ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.387-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.403-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f6e35aeb-8619-4d4a-a276-7ede73b9c323)'. Ident: 'index-813--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 302)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.326-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.388-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.403-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-801--4104909142373009110, commit timestamp: Timestamp(1574796770, 302)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.329-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c with generated UUID: bea5127c-ae7d-40cf-90e4-68c91b29020f and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.389-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 (d8751d53-ba86-426b-9c5a-83d105b2eab2) to test5_fsmdb0.agg_out and drop 483a4dbd-f620-4646-aa29-9e1b4d31a179.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.403-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: bd466199-4068-4a6a-bacb-82c6bd85696d: test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 ( 66f94bbb-cecf-4764-9e47-4125dc0d2339 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.335-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.390-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.403-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 with provided UUID: d3e58507-f835-4ed2-88cb-d4b5fb6e675d and options: { uuid: UUID("d3e58507-f835-4ed2-88cb-d4b5fb6e675d"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.350-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.391-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (483a4dbd-f620-4646-aa29-9e1b4d31a179) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 873), t: 1 } and commit timestamp Timestamp(1574796770, 873)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.416-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.350-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.391-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (483a4dbd-f620-4646-aa29-9e1b4d31a179).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.417-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 with provided UUID: d069e3d2-f32b-476d-bdd9-6bf404b89570 and options: { uuid: UUID("d069e3d2-f32b-476d-bdd9-6bf404b89570"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.350-0500 I STORAGE [conn110] Index build initialized: 899b81b0-d374-4ee7-be8d-2de2f38e41a8: test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 (d069e3d2-f32b-476d-bdd9-6bf404b89570 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.391-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection d8751d53-ba86-426b-9c5a-83d105b2eab2 from test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.433-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.350-0500 I INDEX [conn110] Waiting for index build to complete: 899b81b0-d374-4ee7-be8d-2de2f38e41a8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.391-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (483a4dbd-f620-4646-aa29-9e1b4d31a179)'. Ident: 'index-806--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 873)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.465-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.351-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 64ee00da-8d51-45db-8661-7deaf2eb6066: test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 ( d3e58507-f835-4ed2-88cb-d4b5fb6e675d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.391-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (483a4dbd-f620-4646-aa29-9e1b4d31a179)'. Ident: 'index-815--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 873)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.465-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.351-0500 I INDEX [conn114] Index build completed: 64ee00da-8d51-45db-8661-7deaf2eb6066
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.391-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-805--8000595249233899911, commit timestamp: Timestamp(1574796770, 873)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.465-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: be420134-e402-4f30-8cda-5a6760ba2cf4: test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 (3e68fcf9-7ffb-47a3-ab4e-e455b2334361 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.359-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.393-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3bb86a6e-5f03-4cdb-bb6e-5cc7da4fe1a5: test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 ( 3e68fcf9-7ffb-47a3-ab4e-e455b2334361 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.465-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.359-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.393-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c with provided UUID: bea5127c-ae7d-40cf-90e4-68c91b29020f and options: { uuid: UUID("bea5127c-ae7d-40cf-90e4-68c91b29020f"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.466-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.359-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (d8751d53-ba86-426b-9c5a-83d105b2eab2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 1506), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.409-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.467-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 (d8751d53-ba86-426b-9c5a-83d105b2eab2) to test5_fsmdb0.agg_out and drop 483a4dbd-f620-4646-aa29-9e1b4d31a179.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.359-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (d8751d53-ba86-426b-9c5a-83d105b2eab2).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.428-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.468-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.359-0500 I STORAGE [conn46] renameCollection: renaming collection 66f94bbb-cecf-4764-9e47-4125dc0d2339 from test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.428-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.468-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (483a4dbd-f620-4646-aa29-9e1b4d31a179) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 873), t: 1 } and commit timestamp Timestamp(1574796770, 873)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.359-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d8751d53-ba86-426b-9c5a-83d105b2eab2)'. Ident: 'index-800-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 1506)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.428-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: ee5d6e5f-b52b-4ea1-9254-9b9fed7f1cea: test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 (d3e58507-f835-4ed2-88cb-d4b5fb6e675d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.468-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (483a4dbd-f620-4646-aa29-9e1b4d31a179).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.359-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d8751d53-ba86-426b-9c5a-83d105b2eab2)'. Ident: 'index-805-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 1506)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.428-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.468-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection d8751d53-ba86-426b-9c5a-83d105b2eab2 from test5_fsmdb0.tmp.agg_out.07715626-6e59-4d6d-94fb-a0ec9b61cfb5 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.359-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-797-8224331490264904478, commit timestamp: Timestamp(1574796770, 1506)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.429-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.468-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (483a4dbd-f620-4646-aa29-9e1b4d31a179)'. Ident: 'index-806--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 873)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.359-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.430-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.468-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (483a4dbd-f620-4646-aa29-9e1b4d31a179)'. Ident: 'index-815--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 873)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.359-0500 I INDEX [conn108] Registering index build: 14a2ce52-e334-47c6-943a-2927fd6bcc7f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.433-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 (66f94bbb-cecf-4764-9e47-4125dc0d2339) to test5_fsmdb0.agg_out and drop d8751d53-ba86-426b-9c5a-83d105b2eab2.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.468-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-805--4104909142373009110, commit timestamp: Timestamp(1574796770, 873)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.360-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7401361376651054796, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3176157160209187070, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796767119), clusterTime: Timestamp(1574796767, 746) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796767, 746), signature: { hash: BinData(0, 325022EB1807F7410FDD844C6DDE65A97AA5B6D1), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796765, 2533), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3240ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.433-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (d8751d53-ba86-426b-9c5a-83d105b2eab2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 1506), t: 1 } and commit timestamp Timestamp(1574796770, 1506)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.471-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: be420134-e402-4f30-8cda-5a6760ba2cf4: test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 ( 3e68fcf9-7ffb-47a3-ab4e-e455b2334361 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.360-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.433-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (d8751d53-ba86-426b-9c5a-83d105b2eab2).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.472-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c with provided UUID: bea5127c-ae7d-40cf-90e4-68c91b29020f and options: { uuid: UUID("bea5127c-ae7d-40cf-90e4-68c91b29020f"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.364-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 with generated UUID: 6a75d84e-cb47-465a-a2d7-e86138bf6b35 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.433-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 66f94bbb-cecf-4764-9e47-4125dc0d2339 from test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.487-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.371-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.433-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d8751d53-ba86-426b-9c5a-83d105b2eab2)'. Ident: 'index-808--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 1506)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.506-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.386-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.433-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d8751d53-ba86-426b-9c5a-83d105b2eab2)'. Ident: 'index-817--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 1506)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.506-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.386-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.433-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-807--8000595249233899911, commit timestamp: Timestamp(1574796770, 1506)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.506-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 375fc3a8-5b1c-4262-a37c-11631584f65b: test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 (d3e58507-f835-4ed2-88cb-d4b5fb6e675d ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.386-0500 I STORAGE [conn108] Index build initialized: 14a2ce52-e334-47c6-943a-2927fd6bcc7f: test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c (bea5127c-ae7d-40cf-90e4-68c91b29020f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.433-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: ee5d6e5f-b52b-4ea1-9254-9b9fed7f1cea: test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 ( d3e58507-f835-4ed2-88cb-d4b5fb6e675d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.506-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.386-0500 I INDEX [conn108] Waiting for index build to complete: 14a2ce52-e334-47c6-943a-2927fd6bcc7f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.437-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 with provided UUID: 6a75d84e-cb47-465a-a2d7-e86138bf6b35 and options: { uuid: UUID("6a75d84e-cb47-465a-a2d7-e86138bf6b35"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.507-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.387-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 899b81b0-d374-4ee7-be8d-2de2f38e41a8: test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 ( d069e3d2-f32b-476d-bdd9-6bf404b89570 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.451-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.509-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.387-0500 I INDEX [conn110] Index build completed: 899b81b0-d374-4ee7-be8d-2de2f38e41a8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.469-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.510-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 (66f94bbb-cecf-4764-9e47-4125dc0d2339) to test5_fsmdb0.agg_out and drop d8751d53-ba86-426b-9c5a-83d105b2eab2.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.393-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.469-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.510-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (d8751d53-ba86-426b-9c5a-83d105b2eab2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 1506), t: 1 } and commit timestamp Timestamp(1574796770, 1506)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.393-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.470-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: d8d2d9d7-bd15-4979-a72d-0a7d0721e08e: test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 (d069e3d2-f32b-476d-bdd9-6bf404b89570 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.510-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (d8751d53-ba86-426b-9c5a-83d105b2eab2).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.393-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (66f94bbb-cecf-4764-9e47-4125dc0d2339) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 1883), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.470-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.510-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 66f94bbb-cecf-4764-9e47-4125dc0d2339 from test5_fsmdb0.tmp.agg_out.346bc2a9-b009-4b97-b49c-9ebfbd513a38 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.393-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (66f94bbb-cecf-4764-9e47-4125dc0d2339).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.470-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.510-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d8751d53-ba86-426b-9c5a-83d105b2eab2)'. Ident: 'index-808--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 1506)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.393-0500 I STORAGE [conn112] renameCollection: renaming collection 3e68fcf9-7ffb-47a3-ab4e-e455b2334361 from test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.472-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 (3e68fcf9-7ffb-47a3-ab4e-e455b2334361) to test5_fsmdb0.agg_out and drop 66f94bbb-cecf-4764-9e47-4125dc0d2339.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.510-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d8751d53-ba86-426b-9c5a-83d105b2eab2)'. Ident: 'index-817--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 1506)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.393-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (66f94bbb-cecf-4764-9e47-4125dc0d2339)'. Ident: 'index-804-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 1883)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.473-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.510-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-807--4104909142373009110, commit timestamp: Timestamp(1574796770, 1506)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.393-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (66f94bbb-cecf-4764-9e47-4125dc0d2339)'. Ident: 'index-807-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 1883)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.474-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (66f94bbb-cecf-4764-9e47-4125dc0d2339) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 1883), t: 1 } and commit timestamp Timestamp(1574796770, 1883)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.512-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 375fc3a8-5b1c-4262-a37c-11631584f65b: test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 ( d3e58507-f835-4ed2-88cb-d4b5fb6e675d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.393-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-802-8224331490264904478, commit timestamp: Timestamp(1574796770, 1883)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.474-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (66f94bbb-cecf-4764-9e47-4125dc0d2339).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.514-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 with provided UUID: 6a75d84e-cb47-465a-a2d7-e86138bf6b35 and options: { uuid: UUID("6a75d84e-cb47-465a-a2d7-e86138bf6b35"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.393-0500 I INDEX [conn46] Registering index build: 66ae541a-37db-4254-9150-87c667a665b6
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.474-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 3e68fcf9-7ffb-47a3-ab4e-e455b2334361 from test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.530-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.393-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.474-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (66f94bbb-cecf-4764-9e47-4125dc0d2339)'. Ident: 'index-812--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 1883)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.547-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.394-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4989435630477417536, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3982689720699735689, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796769980), clusterTime: Timestamp(1574796769, 1) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796769, 1), signature: { hash: BinData(0, 241DA2D56C32EE8C8A9D6A780954473E4D508BB8), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 412ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.474-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (66f94bbb-cecf-4764-9e47-4125dc0d2339)'. Ident: 'index-821--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 1883)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.547-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.394-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.474-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-811--8000595249233899911, commit timestamp: Timestamp(1574796770, 1883)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.547-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 14752706-8ef2-4d62-ae02-12bb656dba54: test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 (d069e3d2-f32b-476d-bdd9-6bf404b89570 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.397-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c with generated UUID: c5fd24ed-c87d-4675-b236-6d13d07f9b30 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.476-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d8d2d9d7-bd15-4979-a72d-0a7d0721e08e: test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 ( d069e3d2-f32b-476d-bdd9-6bf404b89570 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.548-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.403-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.477-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c with provided UUID: c5fd24ed-c87d-4675-b236-6d13d07f9b30 and options: { uuid: UUID("c5fd24ed-c87d-4675-b236-6d13d07f9b30"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.548-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.419-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.494-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.549-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 (3e68fcf9-7ffb-47a3-ab4e-e455b2334361) to test5_fsmdb0.agg_out and drop 66f94bbb-cecf-4764-9e47-4125dc0d2339.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.419-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.514-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.551-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.419-0500 I STORAGE [conn46] Index build initialized: 66ae541a-37db-4254-9150-87c667a665b6: test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 (6a75d84e-cb47-465a-a2d7-e86138bf6b35 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.514-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.551-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (66f94bbb-cecf-4764-9e47-4125dc0d2339) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 1883), t: 1 } and commit timestamp Timestamp(1574796770, 1883)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.419-0500 I INDEX [conn46] Waiting for index build to complete: 66ae541a-37db-4254-9150-87c667a665b6
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.514-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 9df8293b-b99c-4740-908b-af15163dcee0: test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c (bea5127c-ae7d-40cf-90e4-68c91b29020f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.551-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (66f94bbb-cecf-4764-9e47-4125dc0d2339).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.422-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 14a2ce52-e334-47c6-943a-2927fd6bcc7f: test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c ( bea5127c-ae7d-40cf-90e4-68c91b29020f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.514-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.551-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 3e68fcf9-7ffb-47a3-ab4e-e455b2334361 from test5_fsmdb0.tmp.agg_out.dc4b92d8-895d-4fbe-a697-7d553ff0cfc0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.422-0500 I INDEX [conn108] Index build completed: 14a2ce52-e334-47c6-943a-2927fd6bcc7f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.514-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.551-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (66f94bbb-cecf-4764-9e47-4125dc0d2339)'. Ident: 'index-812--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 1883)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.429-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.516-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 (d3e58507-f835-4ed2-88cb-d4b5fb6e675d) to test5_fsmdb0.agg_out and drop 3e68fcf9-7ffb-47a3-ab4e-e455b2334361.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.551-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (66f94bbb-cecf-4764-9e47-4125dc0d2339)'. Ident: 'index-821--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 1883)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.430-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.517-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.551-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-811--4104909142373009110, commit timestamp: Timestamp(1574796770, 1883)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.430-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (3e68fcf9-7ffb-47a3-ab4e-e455b2334361) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 2388), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.517-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (3e68fcf9-7ffb-47a3-ab4e-e455b2334361) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 2388), t: 1 } and commit timestamp Timestamp(1574796770, 2388)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.553-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 14752706-8ef2-4d62-ae02-12bb656dba54: test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 ( d069e3d2-f32b-476d-bdd9-6bf404b89570 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.430-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (3e68fcf9-7ffb-47a3-ab4e-e455b2334361).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.517-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (3e68fcf9-7ffb-47a3-ab4e-e455b2334361).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.554-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796770, 1883) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796770, 1947), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2589 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 156ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.430-0500 I STORAGE [conn114] renameCollection: renaming collection d3e58507-f835-4ed2-88cb-d4b5fb6e675d from test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.517-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection d3e58507-f835-4ed2-88cb-d4b5fb6e675d from test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.555-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c with provided UUID: c5fd24ed-c87d-4675-b236-6d13d07f9b30 and options: { uuid: UUID("c5fd24ed-c87d-4675-b236-6d13d07f9b30"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.430-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3e68fcf9-7ffb-47a3-ab4e-e455b2334361)'. Ident: 'index-810-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 2388)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.517-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3e68fcf9-7ffb-47a3-ab4e-e455b2334361)'. Ident: 'index-820--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 2388)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.569-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.430-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3e68fcf9-7ffb-47a3-ab4e-e455b2334361)'. Ident: 'index-811-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 2388)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.517-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3e68fcf9-7ffb-47a3-ab4e-e455b2334361)'. Ident: 'index-827--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 2388)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.589-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.430-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-808-8224331490264904478, commit timestamp: Timestamp(1574796770, 2388)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.517-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-819--8000595249233899911, commit timestamp: Timestamp(1574796770, 2388)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.589-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.430-0500 I INDEX [conn112] Registering index build: 075a59dc-0e6c-4f1b-8c3d-3169efd3cdbe
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.521-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 9df8293b-b99c-4740-908b-af15163dcee0: test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c ( bea5127c-ae7d-40cf-90e4-68c91b29020f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.589-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 2f015639-99d7-45c0-a617-2461899cb313: test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c (bea5127c-ae7d-40cf-90e4-68c91b29020f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.430-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.536-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.589-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.430-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5202032324144391196, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7559103763532264512, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796770266), clusterTime: Timestamp(1574796770, 302) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 302), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 163ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.536-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.590-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.431-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.536-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 211cdbaa-4700-4279-a72c-557f04ae9281: test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 (6a75d84e-cb47-465a-a2d7-e86138bf6b35 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.591-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 (d3e58507-f835-4ed2-88cb-d4b5fb6e675d) to test5_fsmdb0.agg_out and drop 3e68fcf9-7ffb-47a3-ab4e-e455b2334361.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.433-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.536-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.592-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.433-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac with generated UUID: 484ec805-da24-4f47-9c0c-f472702cd523 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.537-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.593-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (3e68fcf9-7ffb-47a3-ab4e-e455b2334361) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 2388), t: 1 } and commit timestamp Timestamp(1574796770, 2388)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.444-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 66ae541a-37db-4254-9150-87c667a665b6: test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 ( 6a75d84e-cb47-465a-a2d7-e86138bf6b35 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.538-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.593-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (3e68fcf9-7ffb-47a3-ab4e-e455b2334361).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.461-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.539-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac with provided UUID: 484ec805-da24-4f47-9c0c-f472702cd523 and options: { uuid: UUID("484ec805-da24-4f47-9c0c-f472702cd523"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.593-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection d3e58507-f835-4ed2-88cb-d4b5fb6e675d from test5_fsmdb0.tmp.agg_out.27b030c3-bcf5-4d62-8b95-11ad8e60b923 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.461-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.542-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 211cdbaa-4700-4279-a72c-557f04ae9281: test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 ( 6a75d84e-cb47-465a-a2d7-e86138bf6b35 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.593-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3e68fcf9-7ffb-47a3-ab4e-e455b2334361)'. Ident: 'index-820--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 2388)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.461-0500 I STORAGE [conn112] Index build initialized: 075a59dc-0e6c-4f1b-8c3d-3169efd3cdbe: test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c (c5fd24ed-c87d-4675-b236-6d13d07f9b30 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.556-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.593-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3e68fcf9-7ffb-47a3-ab4e-e455b2334361)'. Ident: 'index-827--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 2388)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.461-0500 I INDEX [conn112] Waiting for index build to complete: 075a59dc-0e6c-4f1b-8c3d-3169efd3cdbe
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.560-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 (d069e3d2-f32b-476d-bdd9-6bf404b89570) to test5_fsmdb0.agg_out and drop d3e58507-f835-4ed2-88cb-d4b5fb6e675d.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.593-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-819--4104909142373009110, commit timestamp: Timestamp(1574796770, 2388)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.461-0500 I INDEX [conn46] Index build completed: 66ae541a-37db-4254-9150-87c667a665b6
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.560-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (d3e58507-f835-4ed2-88cb-d4b5fb6e675d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 2893), t: 1 } and commit timestamp Timestamp(1574796770, 2893)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.596-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 2f015639-99d7-45c0-a617-2461899cb313: test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c ( bea5127c-ae7d-40cf-90e4-68c91b29020f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.468-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.560-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (d3e58507-f835-4ed2-88cb-d4b5fb6e675d).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.611-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.468-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.560-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection d069e3d2-f32b-476d-bdd9-6bf404b89570 from test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.611-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.469-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (d3e58507-f835-4ed2-88cb-d4b5fb6e675d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 2893), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.560-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d3e58507-f835-4ed2-88cb-d4b5fb6e675d)'. Ident: 'index-824--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 2893)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.611-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 614385aa-4f7c-40ee-bde0-973a729ee62e: test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 (6a75d84e-cb47-465a-a2d7-e86138bf6b35 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.469-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (d3e58507-f835-4ed2-88cb-d4b5fb6e675d).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.560-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d3e58507-f835-4ed2-88cb-d4b5fb6e675d)'. Ident: 'index-831--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 2893)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.611-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.469-0500 I STORAGE [conn110] renameCollection: renaming collection d069e3d2-f32b-476d-bdd9-6bf404b89570 from test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.560-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-823--8000595249233899911, commit timestamp: Timestamp(1574796770, 2893)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.611-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.469-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d3e58507-f835-4ed2-88cb-d4b5fb6e675d)'. Ident: 'index-815-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 2893)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.564-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 with provided UUID: 5fc4ee05-b299-454c-9fbe-e9d283cc99ce and options: { uuid: UUID("5fc4ee05-b299-454c-9fbe-e9d283cc99ce"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.613-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac with provided UUID: 484ec805-da24-4f47-9c0c-f472702cd523 and options: { uuid: UUID("484ec805-da24-4f47-9c0c-f472702cd523"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.469-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d3e58507-f835-4ed2-88cb-d4b5fb6e675d)'. Ident: 'index-817-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 2893)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.579-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.614-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.469-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-812-8224331490264904478, commit timestamp: Timestamp(1574796770, 2893)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.599-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.622-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 614385aa-4f7c-40ee-bde0-973a729ee62e: test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 ( 6a75d84e-cb47-465a-a2d7-e86138bf6b35 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.469-0500 I INDEX [conn114] Registering index build: 8308cbfc-e2aa-4c58-823a-a5a2b15d2617
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.599-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.630-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.469-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.599-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 6d8de2d8-4055-410e-94c4-eea417a5f915: test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c (c5fd24ed-c87d-4675-b236-6d13d07f9b30 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.644-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 (d069e3d2-f32b-476d-bdd9-6bf404b89570) to test5_fsmdb0.agg_out and drop d3e58507-f835-4ed2-88cb-d4b5fb6e675d.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.469-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4948518991020857980, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6293249942100942111, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796770266), clusterTime: Timestamp(1574796770, 302) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 302), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 201ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.599-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.644-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (d3e58507-f835-4ed2-88cb-d4b5fb6e675d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 2893), t: 1 } and commit timestamp Timestamp(1574796770, 2893)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.469-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.600-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.644-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (d3e58507-f835-4ed2-88cb-d4b5fb6e675d).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.472-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 with generated UUID: 5fc4ee05-b299-454c-9fbe-e9d283cc99ce and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.601-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c (bea5127c-ae7d-40cf-90e4-68c91b29020f) to test5_fsmdb0.agg_out and drop d069e3d2-f32b-476d-bdd9-6bf404b89570.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.644-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection d069e3d2-f32b-476d-bdd9-6bf404b89570 from test5_fsmdb0.tmp.agg_out.bc008c48-5f88-4fb1-bb31-f5b7e2bc2099 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.483-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.602-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.644-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d3e58507-f835-4ed2-88cb-d4b5fb6e675d)'. Ident: 'index-824--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 2893)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.498-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.603-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (d069e3d2-f32b-476d-bdd9-6bf404b89570) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 3398), t: 1 } and commit timestamp Timestamp(1574796770, 3398)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.644-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d3e58507-f835-4ed2-88cb-d4b5fb6e675d)'. Ident: 'index-831--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 2893)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.498-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.603-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (d069e3d2-f32b-476d-bdd9-6bf404b89570).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.644-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-823--4104909142373009110, commit timestamp: Timestamp(1574796770, 2893)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.498-0500 I STORAGE [conn114] Index build initialized: 8308cbfc-e2aa-4c58-823a-a5a2b15d2617: test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac (484ec805-da24-4f47-9c0c-f472702cd523 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.603-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection bea5127c-ae7d-40cf-90e4-68c91b29020f from test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.647-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 with provided UUID: 5fc4ee05-b299-454c-9fbe-e9d283cc99ce and options: { uuid: UUID("5fc4ee05-b299-454c-9fbe-e9d283cc99ce"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.499-0500 I INDEX [conn114] Waiting for index build to complete: 8308cbfc-e2aa-4c58-823a-a5a2b15d2617
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.603-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d069e3d2-f32b-476d-bdd9-6bf404b89570)'. Ident: 'index-826--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 3398)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.663-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.500-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 075a59dc-0e6c-4f1b-8c3d-3169efd3cdbe: test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c ( c5fd24ed-c87d-4675-b236-6d13d07f9b30 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.603-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d069e3d2-f32b-476d-bdd9-6bf404b89570)'. Ident: 'index-835--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 3398)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.683-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.500-0500 I INDEX [conn112] Index build completed: 075a59dc-0e6c-4f1b-8c3d-3169efd3cdbe
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.603-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-825--8000595249233899911, commit timestamp: Timestamp(1574796770, 3398)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.683-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.509-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.605-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 with provided UUID: 5e8cc86c-d251-4056-9320-176e36a8c77a and options: { uuid: UUID("5e8cc86c-d251-4056-9320-176e36a8c77a"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.683-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 8ed64617-98c0-4a51-bc5a-261967530482: test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c (c5fd24ed-c87d-4675-b236-6d13d07f9b30 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.509-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.606-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 6d8de2d8-4055-410e-94c4-eea417a5f915: test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c ( c5fd24ed-c87d-4675-b236-6d13d07f9b30 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.683-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.509-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (d069e3d2-f32b-476d-bdd9-6bf404b89570) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 3398), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.622-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.684-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.509-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (d069e3d2-f32b-476d-bdd9-6bf404b89570).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.640-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.685-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c (bea5127c-ae7d-40cf-90e4-68c91b29020f) to test5_fsmdb0.agg_out and drop d069e3d2-f32b-476d-bdd9-6bf404b89570.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.509-0500 I STORAGE [conn108] renameCollection: renaming collection bea5127c-ae7d-40cf-90e4-68c91b29020f from test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.640-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.686-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.509-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d069e3d2-f32b-476d-bdd9-6bf404b89570)'. Ident: 'index-816-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 3398)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.640-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 7bb82b49-d90c-4cbc-95d2-fbb6ab254f34: test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac (484ec805-da24-4f47-9c0c-f472702cd523 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.686-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (d069e3d2-f32b-476d-bdd9-6bf404b89570) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 3398), t: 1 } and commit timestamp Timestamp(1574796770, 3398)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.509-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d069e3d2-f32b-476d-bdd9-6bf404b89570)'. Ident: 'index-819-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 3398)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.641-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.686-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (d069e3d2-f32b-476d-bdd9-6bf404b89570).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.509-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-813-8224331490264904478, commit timestamp: Timestamp(1574796770, 3398)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.641-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.686-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection bea5127c-ae7d-40cf-90e4-68c91b29020f from test5_fsmdb0.tmp.agg_out.efcc9a94-f261-46e3-8c88-c25c5dbf759c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.509-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.642-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 (6a75d84e-cb47-465a-a2d7-e86138bf6b35) to test5_fsmdb0.agg_out and drop bea5127c-ae7d-40cf-90e4-68c91b29020f.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.687-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d069e3d2-f32b-476d-bdd9-6bf404b89570)'. Ident: 'index-826--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 3398)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.509-0500 I INDEX [conn46] Registering index build: b4fcb1a8-df09-45f4-bde9-ca11580927f2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.643-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.687-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d069e3d2-f32b-476d-bdd9-6bf404b89570)'. Ident: 'index-835--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 3398)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.509-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5318665891946806842, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2071926563669653720, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796770328), clusterTime: Timestamp(1574796770, 937) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 1001), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, 
Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 180ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.644-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (bea5127c-ae7d-40cf-90e4-68c91b29020f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 3903), t: 1 } and commit timestamp Timestamp(1574796770, 3903)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.687-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-825--4104909142373009110, commit timestamp: Timestamp(1574796770, 3398)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.510-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.644-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (bea5127c-ae7d-40cf-90e4-68c91b29020f).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.688-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 8ed64617-98c0-4a51-bc5a-261967530482: test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c ( c5fd24ed-c87d-4675-b236-6d13d07f9b30 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.512-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 with generated UUID: 5e8cc86c-d251-4056-9320-176e36a8c77a and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.644-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 6a75d84e-cb47-465a-a2d7-e86138bf6b35 from test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.689-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 with provided UUID: 5e8cc86c-d251-4056-9320-176e36a8c77a and options: { uuid: UUID("5e8cc86c-d251-4056-9320-176e36a8c77a"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.521-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.644-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bea5127c-ae7d-40cf-90e4-68c91b29020f)'. Ident: 'index-830--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 3903)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.705-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.539-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.644-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bea5127c-ae7d-40cf-90e4-68c91b29020f)'. Ident: 'index-839--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 3903)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.724-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.539-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.644-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-829--8000595249233899911, commit timestamp: Timestamp(1574796770, 3903)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.724-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.539-0500 I STORAGE [conn46] Index build initialized: b4fcb1a8-df09-45f4-bde9-ca11580927f2: test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 (5fc4ee05-b299-454c-9fbe-e9d283cc99ce ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.648-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 7bb82b49-d90c-4cbc-95d2-fbb6ab254f34: test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac ( 484ec805-da24-4f47-9c0c-f472702cd523 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.724-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: eaebfeb6-ffb1-4d19-9480-0b15d012bf37: test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac (484ec805-da24-4f47-9c0c-f472702cd523 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.539-0500 I INDEX [conn46] Waiting for index build to complete: b4fcb1a8-df09-45f4-bde9-ca11580927f2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.662-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.724-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.540-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 8308cbfc-e2aa-4c58-823a-a5a2b15d2617: test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac ( 484ec805-da24-4f47-9c0c-f472702cd523 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.662-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.725-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.540-0500 I INDEX [conn114] Index build completed: 8308cbfc-e2aa-4c58-823a-a5a2b15d2617
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.662-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 8cf03f5b-2edf-4fba-b42c-cfdd56a49444: test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 (5fc4ee05-b299-454c-9fbe-e9d283cc99ce ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.726-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 (6a75d84e-cb47-465a-a2d7-e86138bf6b35) to test5_fsmdb0.agg_out and drop bea5127c-ae7d-40cf-90e4-68c91b29020f.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.548-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.662-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.727-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.549-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.663-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.728-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (bea5127c-ae7d-40cf-90e4-68c91b29020f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 3903), t: 1 } and commit timestamp Timestamp(1574796770, 3903)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.549-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (bea5127c-ae7d-40cf-90e4-68c91b29020f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 3903), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.666-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.728-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (bea5127c-ae7d-40cf-90e4-68c91b29020f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.549-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (bea5127c-ae7d-40cf-90e4-68c91b29020f).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.667-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 8cf03f5b-2edf-4fba-b42c-cfdd56a49444: test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 ( 5fc4ee05-b299-454c-9fbe-e9d283cc99ce ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.728-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 6a75d84e-cb47-465a-a2d7-e86138bf6b35 from test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.549-0500 I STORAGE [conn110] renameCollection: renaming collection 6a75d84e-cb47-465a-a2d7-e86138bf6b35 from test5_fsmdb0.tmp.agg_out.a4ddae18-50d7-4c7a-ba2e-ce06fd0b7732 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.670-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 with provided UUID: e4feacf1-9e47-4f02-abe4-4194aa3303df and options: { uuid: UUID("e4feacf1-9e47-4f02-abe4-4194aa3303df"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.728-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bea5127c-ae7d-40cf-90e4-68c91b29020f)'. Ident: 'index-830--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 3903)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.549-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bea5127c-ae7d-40cf-90e4-68c91b29020f)'. Ident: 'index-822-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 3903)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.684-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.728-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bea5127c-ae7d-40cf-90e4-68c91b29020f)'. Ident: 'index-839--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 3903)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.549-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bea5127c-ae7d-40cf-90e4-68c91b29020f)'. Ident: 'index-823-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 3903)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.704-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.728-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-829--4104909142373009110, commit timestamp: Timestamp(1574796770, 3903)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.549-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-820-8224331490264904478, commit timestamp: Timestamp(1574796770, 3903)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.704-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.730-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: eaebfeb6-ffb1-4d19-9480-0b15d012bf37: test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac ( 484ec805-da24-4f47-9c0c-f472702cd523 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.549-0500 I INDEX [conn112] Registering index build: 417f6795-7ecb-4c3b-a2ad-1df76304eddd
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.704-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 466013e2-fbde-482d-9520-31b664b67056: test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 (5e8cc86c-d251-4056-9320-176e36a8c77a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.745-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.549-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.704-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.745-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.549-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6334940727548565581, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8366603882378107422, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796770362), clusterTime: Timestamp(1574796770, 1570) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 1698), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 186ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.705-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.745-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: ad5437bc-ac05-411d-a377-f7c9a8fb0e79: test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 (5fc4ee05-b299-454c-9fbe-e9d283cc99ce ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.550-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.707-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.745-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.560-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.709-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 466013e2-fbde-482d-9520-31b664b67056: test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 ( 5e8cc86c-d251-4056-9320-176e36a8c77a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.745-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.569-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.712-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.748-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.569-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.712-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c (c5fd24ed-c87d-4675-b236-6d13d07f9b30) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 5232), t: 1 } and commit timestamp Timestamp(1574796770, 5232)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.752-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ad5437bc-ac05-411d-a377-f7c9a8fb0e79: test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 ( 5fc4ee05-b299-454c-9fbe-e9d283cc99ce ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.569-0500 I STORAGE [conn112] Index build initialized: 417f6795-7ecb-4c3b-a2ad-1df76304eddd: test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 (5e8cc86c-d251-4056-9320-176e36a8c77a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.712-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c (c5fd24ed-c87d-4675-b236-6d13d07f9b30).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.753-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796770, 4035) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796770, 4163), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 1665 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 177ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.569-0500 I INDEX [conn112] Waiting for index build to complete: 417f6795-7ecb-4c3b-a2ad-1df76304eddd
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.712-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c (c5fd24ed-c87d-4675-b236-6d13d07f9b30)'. Ident: 'index-838--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 5232)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.753-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 with provided UUID: e4feacf1-9e47-4f02-abe4-4194aa3303df and options: { uuid: UUID("e4feacf1-9e47-4f02-abe4-4194aa3303df"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.569-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.712-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c (c5fd24ed-c87d-4675-b236-6d13d07f9b30)'. Ident: 'index-847--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 5232)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.768-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.569-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: b4fcb1a8-df09-45f4-bde9-ca11580927f2: test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 ( 5fc4ee05-b299-454c-9fbe-e9d283cc99ce ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.712-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c'. Ident: collection-837--8000595249233899911, commit timestamp: Timestamp(1574796770, 5232)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.784-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.570-0500 I INDEX [conn46] Index build completed: b4fcb1a8-df09-45f4-bde9-ca11580927f2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.715-0500 I COMMAND [ReplWriterWorker-4] CMD: drop test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.784-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.570-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.715-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac (484ec805-da24-4f47-9c0c-f472702cd523) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 5361), t: 1 } and commit timestamp Timestamp(1574796770, 5361)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.784-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 6152041e-413b-41c4-aeca-885f9d61a2ea: test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 (5e8cc86c-d251-4056-9320-176e36a8c77a ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.572-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 with generated UUID: e4feacf1-9e47-4f02-abe4-4194aa3303df and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.715-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac (484ec805-da24-4f47-9c0c-f472702cd523).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.784-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.574-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.715-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac (484ec805-da24-4f47-9c0c-f472702cd523)'. Ident: 'index-844--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 5361)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.784-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.584-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 417f6795-7ecb-4c3b-a2ad-1df76304eddd: test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 ( 5e8cc86c-d251-4056-9320-176e36a8c77a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.715-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac (484ec805-da24-4f47-9c0c-f472702cd523)'. Ident: 'index-851--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 5361)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.788-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: drain applied 64 side writes (inserted: 64, deleted: 0) for '_id_hashed' in 1 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.584-0500 I INDEX [conn112] Index build completed: 417f6795-7ecb-4c3b-a2ad-1df76304eddd
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.715-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac'. Ident: collection-843--8000595249233899911, commit timestamp: Timestamp(1574796770, 5361)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.788-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.593-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.719-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 with provided UUID: 381e3f8c-9478-4ca9-907a-f585d558d08f and options: { uuid: UUID("381e3f8c-9478-4ca9-907a-f585d558d08f"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.788-0500 I COMMAND [ReplWriterWorker-14] CMD: drop test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.593-0500 I INDEX [conn110] Registering index build: 68c08f52-933a-4ef0-99a1-10a0dad21209
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.732-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.788-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c (c5fd24ed-c87d-4675-b236-6d13d07f9b30) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 5232), t: 1 } and commit timestamp Timestamp(1574796770, 5232)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.593-0500 I COMMAND [conn108] CMD: drop test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.746-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.789-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c (c5fd24ed-c87d-4675-b236-6d13d07f9b30).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.612-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.746-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.789-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c (c5fd24ed-c87d-4675-b236-6d13d07f9b30)'. Ident: 'index-838--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 5232)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.612-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.746-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: ebb7474e-119f-433e-8335-de4b0e1ab6e1: test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 (e4feacf1-9e47-4f02-abe4-4194aa3303df ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.789-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c (c5fd24ed-c87d-4675-b236-6d13d07f9b30)'. Ident: 'index-847--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 5232)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.612-0500 I STORAGE [conn110] Index build initialized: 68c08f52-933a-4ef0-99a1-10a0dad21209: test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 (e4feacf1-9e47-4f02-abe4-4194aa3303df ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.746-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.789-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c'. Ident: collection-837--4104909142373009110, commit timestamp: Timestamp(1574796770, 5232)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.612-0500 I INDEX [conn110] Waiting for index build to complete: 68c08f52-933a-4ef0-99a1-10a0dad21209
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.747-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.791-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 6152041e-413b-41c4-aeca-885f9d61a2ea: test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 ( 5e8cc86c-d251-4056-9320-176e36a8c77a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.612-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c (c5fd24ed-c87d-4675-b236-6d13d07f9b30) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.748-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 with provided UUID: 4db3102d-eaad-4c21-a6bd-16884034ad49 and options: { uuid: UUID("4db3102d-eaad-4c21-a6bd-16884034ad49"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.791-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.612-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c (c5fd24ed-c87d-4675-b236-6d13d07f9b30).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.749-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.792-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac (484ec805-da24-4f47-9c0c-f472702cd523) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 5361), t: 1 } and commit timestamp Timestamp(1574796770, 5361)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.612-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c (c5fd24ed-c87d-4675-b236-6d13d07f9b30)'. Ident: 'index-830-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 5232)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.757-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: ebb7474e-119f-433e-8335-de4b0e1ab6e1: test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 ( e4feacf1-9e47-4f02-abe4-4194aa3303df ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.792-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac (484ec805-da24-4f47-9c0c-f472702cd523).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.612-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c (c5fd24ed-c87d-4675-b236-6d13d07f9b30)'. Ident: 'index-831-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 5232)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.764-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.792-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac (484ec805-da24-4f47-9c0c-f472702cd523)'. Ident: 'index-844--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 5361)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.612-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c'. Ident: collection-828-8224331490264904478, commit timestamp: Timestamp(1574796770, 5232)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.768-0500 I COMMAND [ReplWriterWorker-4] CMD: drop test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.792-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac (484ec805-da24-4f47-9c0c-f472702cd523)'. Ident: 'index-851--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 5361)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.612-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.768-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 (5fc4ee05-b299-454c-9fbe-e9d283cc99ce) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 5919), t: 1 } and commit timestamp Timestamp(1574796770, 5919)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.792-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac'. Ident: collection-843--4104909142373009110, commit timestamp: Timestamp(1574796770, 5361)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.613-0500 I COMMAND [conn70] command test5_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8871003894538698348, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4208410176295381409, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796770395), clusterTime: Timestamp(1574796770, 1947) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 2011), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.0a7e5ae2-b723-4caa-b856-2a9f2f79453c\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:982 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 216ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.768-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 (5fc4ee05-b299-454c-9fbe-e9d283cc99ce).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.795-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 with provided UUID: 381e3f8c-9478-4ca9-907a-f585d558d08f and options: { uuid: UUID("381e3f8c-9478-4ca9-907a-f585d558d08f"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.613-0500 I COMMAND [conn114] CMD: drop test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.768-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 (5fc4ee05-b299-454c-9fbe-e9d283cc99ce)'. Ident: 'index-846--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 5919)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.806-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.613-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac (484ec805-da24-4f47-9c0c-f472702cd523) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.768-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 (5fc4ee05-b299-454c-9fbe-e9d283cc99ce)'. Ident: 'index-853--8000595249233899911', commit timestamp: 'Timestamp(1574796770, 5919)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.818-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.613-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac (484ec805-da24-4f47-9c0c-f472702cd523).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:50.768-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5'. Ident: collection-845--8000595249233899911, commit timestamp: Timestamp(1574796770, 5919)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.818-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.613-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac (484ec805-da24-4f47-9c0c-f472702cd523)'. Ident: 'index-834-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 5361)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.191-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 with provided UUID: 4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5 and options: { uuid: UUID("4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.818-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 63d587ae-203f-4bde-a146-49c95d3852b0: test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 (e4feacf1-9e47-4f02-abe4-4194aa3303df ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.613-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac (484ec805-da24-4f47-9c0c-f472702cd523)'. Ident: 'index-835-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 5361)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.206-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.818-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.613-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac'. Ident: collection-832-8224331490264904478, commit timestamp: Timestamp(1574796770, 5361)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.819-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.613-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.819-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 with provided UUID: 4db3102d-eaad-4c21-a6bd-16884034ad49 and options: { uuid: UUID("4db3102d-eaad-4c21-a6bd-16884034ad49"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.613-0500 I COMMAND [conn65] command test5_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6842069930874349902, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8205694601620934133, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796770432), clusterTime: Timestamp(1574796770, 2452) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 2516), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.3e88a175-149a-4f9a-8d7f-ff7ba706c4ac\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:982 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 180ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.821-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.615-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 with generated UUID: 381e3f8c-9478-4ca9-907a-f585d558d08f and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.827-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 63d587ae-203f-4bde-a146-49c95d3852b0: test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 ( e4feacf1-9e47-4f02-abe4-4194aa3303df ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.616-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.832-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.616-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 with generated UUID: 4db3102d-eaad-4c21-a6bd-16884034ad49 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.836-0500 I COMMAND [ReplWriterWorker-0] CMD: drop test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.623-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 68c08f52-933a-4ef0-99a1-10a0dad21209: test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 ( e4feacf1-9e47-4f02-abe4-4194aa3303df ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.836-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 (5fc4ee05-b299-454c-9fbe-e9d283cc99ce) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796770, 5919), t: 1 } and commit timestamp Timestamp(1574796770, 5919)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.623-0500 I INDEX [conn110] Index build completed: 68c08f52-933a-4ef0-99a1-10a0dad21209
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.836-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 (5fc4ee05-b299-454c-9fbe-e9d283cc99ce).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.639-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.836-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 (5fc4ee05-b299-454c-9fbe-e9d283cc99ce)'. Ident: 'index-846--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 5919)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.644-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.836-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 (5fc4ee05-b299-454c-9fbe-e9d283cc99ce)'. Ident: 'index-853--4104909142373009110', commit timestamp: 'Timestamp(1574796770, 5919)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.645-0500 I INDEX [conn108] Registering index build: 9a03cdce-cbfb-4b0f-8f5c-c3bb17ae8962
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:50.836-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5'. Ident: collection-845--4104909142373009110, commit timestamp: Timestamp(1574796770, 5919)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.645-0500 I INDEX [conn114] Registering index build: 4549e5cd-3407-473f-8fed-202b7ba03af5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.208-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 with provided UUID: 4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5 and options: { uuid: UUID("4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.645-0500 I COMMAND [conn46] CMD: drop test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.661-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.661-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.661-0500 I STORAGE [conn108] Index build initialized: 9a03cdce-cbfb-4b0f-8f5c-c3bb17ae8962: test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 (4db3102d-eaad-4c21-a6bd-16884034ad49 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.661-0500 I INDEX [conn108] Waiting for index build to complete: 9a03cdce-cbfb-4b0f-8f5c-c3bb17ae8962
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.661-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 (5fc4ee05-b299-454c-9fbe-e9d283cc99ce) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.661-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 (5fc4ee05-b299-454c-9fbe-e9d283cc99ce).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.661-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 (5fc4ee05-b299-454c-9fbe-e9d283cc99ce)'. Ident: 'index-838-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 5919)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.661-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5 (5fc4ee05-b299-454c-9fbe-e9d283cc99ce)'. Ident: 'index-839-8224331490264904478', commit timestamp: 'Timestamp(1574796770, 5919)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.661-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5'. Ident: collection-836-8224331490264904478, commit timestamp: Timestamp(1574796770, 5919)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.661-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.662-0500 I COMMAND [conn68] command test5_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3366924864888052496, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5177518333284118661, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796770470), clusterTime: Timestamp(1574796770, 2957) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 3021), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.8b6b42b2-2c6a-4885-9844-bddfcdfac2f5\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:982 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 190ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.662-0500 I COMMAND [conn112] CMD: drop test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.662-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.665-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 with generated UUID: 4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.677-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.692-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.692-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.692-0500 I STORAGE [conn114] Index build initialized: 4549e5cd-3407-473f-8fed-202b7ba03af5: test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 (381e3f8c-9478-4ca9-907a-f585d558d08f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.692-0500 I INDEX [conn114] Waiting for index build to complete: 4549e5cd-3407-473f-8fed-202b7ba03af5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.692-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 (5e8cc86c-d251-4056-9320-176e36a8c77a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.693-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 9a03cdce-cbfb-4b0f-8f5c-c3bb17ae8962: test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 ( 4db3102d-eaad-4c21-a6bd-16884034ad49 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:50.700-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.187-0500 I INDEX [conn108] Index build completed: 9a03cdce-cbfb-4b0f-8f5c-c3bb17ae8962
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.187-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 (5e8cc86c-d251-4056-9320-176e36a8c77a).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.187-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 (5e8cc86c-d251-4056-9320-176e36a8c77a)'. Ident: 'index-842-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.187-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 (5e8cc86c-d251-4056-9320-176e36a8c77a)'. Ident: 'index-843-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87'. Ident: collection-840-8224331490264904478, commit timestamp: Timestamp(1574796773, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 5673), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 5021 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2547ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 appName: "tid:0" command: create { create: "tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5", temp: true, validationLevel: "off", validationAction: "warn", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 6111), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2522ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 command: drop { drop: "tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, $clusterTime: { clusterTime: Timestamp(1574796770, 5983), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:420 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2525ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (6a75d84e-cb47-465a-a2d7-e86138bf6b35) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 2), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (6a75d84e-cb47-465a-a2d7-e86138bf6b35).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I STORAGE [conn110] renameCollection: renaming collection e4feacf1-9e47-4f02-abe4-4194aa3303df from test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6a75d84e-cb47-465a-a2d7-e86138bf6b35)'. Ident: 'index-826-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6a75d84e-cb47-465a-a2d7-e86138bf6b35)'. Ident: 'index-827-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-824-8224331490264904478, commit timestamp: Timestamp(1574796773, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I INDEX [conn108] Registering index build: 64594f3e-0a53-498a-8a51-6338de2760ab
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 appName: "tid:1" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "off", validationAction: "warn" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 6356), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2516005 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2516ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I COMMAND [conn67] command test5_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7252831880849888981, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7184553067146200549, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796770511), clusterTime: Timestamp(1574796770, 3462) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 3462), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:982 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2676ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796770, 4035), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796770, 4163), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796770, 4035). Collection minimum timestamp is Timestamp(1574796770, 6359)" errName:SnapshotUnavailable errCode:246 reslen:602 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2433855 } }, Collection: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 17 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2434ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.188-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1673549747043795363, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7716538562295668687, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796770570), clusterTime: Timestamp(1574796770, 4035) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 4163), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2617ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.189-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.191-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 with generated UUID: 6579dda9-84dc-4b43-ae15-ac891bb07a42 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.192-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 with generated UUID: b854d2d3-d612-411f-873c-68987851fbde and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.199-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.224-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.225-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.225-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.226-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 4d986ac0-da38-476f-8b41-0d00e97d9da5: test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 (4db3102d-eaad-4c21-a6bd-16884034ad49 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.226-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.226-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.227-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.227-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.227-0500 I STORAGE [conn108] Index build initialized: 64594f3e-0a53-498a-8a51-6338de2760ab: test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.227-0500 I INDEX [conn108] Waiting for index build to complete: 64594f3e-0a53-498a-8a51-6338de2760ab
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.228-0500 I COMMAND [ReplWriterWorker-8] CMD: drop test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.228-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 (5e8cc86c-d251-4056-9320-176e36a8c77a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 1), t: 1 } and commit timestamp Timestamp(1574796773, 1)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.228-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 (5e8cc86c-d251-4056-9320-176e36a8c77a).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.228-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 (5e8cc86c-d251-4056-9320-176e36a8c77a)'. Ident: 'index-850--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 1)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.228-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 (5e8cc86c-d251-4056-9320-176e36a8c77a)'. Ident: 'index-857--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 1)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.228-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87'. Ident: collection-849--8000595249233899911, commit timestamp: Timestamp(1574796773, 1)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.229-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 (e4feacf1-9e47-4f02-abe4-4194aa3303df) to test5_fsmdb0.agg_out and drop 6a75d84e-cb47-465a-a2d7-e86138bf6b35.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.229-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.231-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 4549e5cd-3407-473f-8fed-202b7ba03af5: test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 ( 381e3f8c-9478-4ca9-907a-f585d558d08f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.231-0500 I INDEX [conn114] Index build completed: 4549e5cd-3407-473f-8fed-202b7ba03af5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.231-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 5737), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 405 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2585ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.238-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.242-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.242-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.242-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 3d97b061-2be3-40f0-a211-91d0e1b5d103: test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 (4db3102d-eaad-4c21-a6bd-16884034ad49 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.242-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.242-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.244-0500 I COMMAND [ReplWriterWorker-5] CMD: drop test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.244-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 (5e8cc86c-d251-4056-9320-176e36a8c77a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 1), t: 1 } and commit timestamp Timestamp(1574796773, 1)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.244-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 (5e8cc86c-d251-4056-9320-176e36a8c77a).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.244-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 (5e8cc86c-d251-4056-9320-176e36a8c77a)'. Ident: 'index-850--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 1)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.244-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87 (5e8cc86c-d251-4056-9320-176e36a8c77a)'. Ident: 'index-857--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 1)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.244-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.615e9b3c-5f9e-4698-a1c5-20f969ffcc87'. Ident: collection-849--4104909142373009110, commit timestamp: Timestamp(1574796773, 1)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.244-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.247-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.247-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.248-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (e4feacf1-9e47-4f02-abe4-4194aa3303df) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 508), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.248-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (e4feacf1-9e47-4f02-abe4-4194aa3303df).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.248-0500 I STORAGE [conn46] renameCollection: renaming collection 4db3102d-eaad-4c21-a6bd-16884034ad49 from test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.248-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e4feacf1-9e47-4f02-abe4-4194aa3303df)'. Ident: 'index-846-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 508)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.248-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e4feacf1-9e47-4f02-abe4-4194aa3303df)'. Ident: 'index-847-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 508)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.248-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-845-8224331490264904478, commit timestamp: Timestamp(1574796773, 508)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.248-0500 I INDEX [conn110] Registering index build: 67967407-7212-43bd-8642-65654c5428ac
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.248-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.248-0500 I INDEX [conn112] Registering index build: 165beaaf-f8bf-4945-91a2-4861bae8a06c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.248-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6089892252679279782, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3269849649115822486, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796770615), clusterTime: Timestamp(1574796770, 5361) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 5489), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2632ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:53.248-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796770, 5361), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2633ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.248-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.251-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 with generated UUID: 2bdd1f07-4539-490c-be19-956dfe8b90de and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.258-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.275-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.275-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.276-0500 I STORAGE [conn110] Index build initialized: 67967407-7212-43bd-8642-65654c5428ac: test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 (b854d2d3-d612-411f-873c-68987851fbde ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.276-0500 I INDEX [conn110] Waiting for index build to complete: 67967407-7212-43bd-8642-65654c5428ac
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.276-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.278-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 64594f3e-0a53-498a-8a51-6338de2760ab: test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 ( 4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.287-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.287-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.297-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.305-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.305-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.305-0500 I STORAGE [conn112] Index build initialized: 165beaaf-f8bf-4945-91a2-4861bae8a06c: test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 (6579dda9-84dc-4b43-ae15-ac891bb07a42 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.305-0500 I INDEX [conn112] Waiting for index build to complete: 165beaaf-f8bf-4945-91a2-4861bae8a06c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.305-0500 I INDEX [conn108] Index build completed: 64594f3e-0a53-498a-8a51-6338de2760ab
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.305-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.305-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 2), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 76 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 117ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.306-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (4db3102d-eaad-4c21-a6bd-16884034ad49) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 1016), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.306-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (4db3102d-eaad-4c21-a6bd-16884034ad49).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.306-0500 I STORAGE [conn46] renameCollection: renaming collection 381e3f8c-9478-4ca9-907a-f585d558d08f from test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.306-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4db3102d-eaad-4c21-a6bd-16884034ad49)'. Ident: 'index-852-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 1016)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.306-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4db3102d-eaad-4c21-a6bd-16884034ad49)'. Ident: 'index-853-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 1016)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.306-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-850-8224331490264904478, commit timestamp: Timestamp(1574796773, 1016)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.306-0500 I INDEX [conn114] Registering index build: 4fd60e90-1d47-4703-95d5-4ef4fa8ec268
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.306-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.306-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7235881013425862967, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 321633942914453669, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796770614), clusterTime: Timestamp(1574796770, 5360) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 5489), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2691ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:53.306-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796770, 5360), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2692ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:53.349-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796770, 5983), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2685ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.451-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3d97b061-2be3-40f0-a211-91d0e1b5d103: test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 ( 4db3102d-eaad-4c21-a6bd-16884034ad49 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.537-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (6a75d84e-cb47-465a-a2d7-e86138bf6b35) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 2), t: 1 } and commit timestamp Timestamp(1574796773, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.306-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 67967407-7212-43bd-8642-65654c5428ac: test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 ( b854d2d3-d612-411f-873c-68987851fbde ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:53.386-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796773, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 195ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:53.387-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796773, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 197ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.539-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 (e4feacf1-9e47-4f02-abe4-4194aa3303df) to test5_fsmdb0.agg_out and drop 6a75d84e-cb47-465a-a2d7-e86138bf6b35.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.307-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.537-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (6a75d84e-cb47-465a-a2d7-e86138bf6b35).
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:53.495-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796773, 1016), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 187ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:53.441-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796773, 508), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 191ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.539-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (6a75d84e-cb47-465a-a2d7-e86138bf6b35) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 2), t: 1 } and commit timestamp Timestamp(1574796773, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.309-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f with generated UUID: 13dd6a3b-20f8-4643-8de2-4042973d3402 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.537-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection e4feacf1-9e47-4f02-abe4-4194aa3303df from test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:53.573-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796773, 2529), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 185ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:53.537-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796773, 1523), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 186ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.539-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (6a75d84e-cb47-465a-a2d7-e86138bf6b35).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.317-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.537-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6a75d84e-cb47-465a-a2d7-e86138bf6b35)'. Ident: 'index-834--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 2)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:53.574-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796773, 2529), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 185ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.539-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection e4feacf1-9e47-4f02-abe4-4194aa3303df from test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.334-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.537-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6a75d84e-cb47-465a-a2d7-e86138bf6b35)'. Ident: 'index-841--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 2)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:53.619-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796773, 3035), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 176ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.539-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6a75d84e-cb47-465a-a2d7-e86138bf6b35)'. Ident: 'index-834--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.334-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.537-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-833--8000595249233899911, commit timestamp: Timestamp(1574796773, 2)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:56.485-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796773, 4044), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:815 protocol:op_msg 2948ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.539-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6a75d84e-cb47-465a-a2d7-e86138bf6b35)'. Ident: 'index-841--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.334-0500 I STORAGE [conn114] Index build initialized: 4fd60e90-1d47-4703-95d5-4ef4fa8ec268: test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 (2bdd1f07-4539-490c-be19-956dfe8b90de ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.537-0500 I REPL [ReplWriterWorker-10] applied op: command { op: "c", ns: "test5_fsmdb0.$cmd", ui: UUID("e4feacf1-9e47-4f02-abe4-4194aa3303df"), o: { renameCollection: "test5_fsmdb0.tmp.agg_out.3ca557f3-d445-43f3-b852-402e85e63e86", to: "test5_fsmdb0.agg_out", stayTemp: false, dropTarget: UUID("6a75d84e-cb47-465a-a2d7-e86138bf6b35") }, o2: { numRecords: 500 }, ts: Timestamp(1574796773, 2), t: 1, wall: new Date(1574796773188), v: 2 }, took 309ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.539-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-833--4104909142373009110, commit timestamp: Timestamp(1574796773, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.334-0500 I INDEX [conn114] Waiting for index build to complete: 4fd60e90-1d47-4703-95d5-4ef4fa8ec268
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.538-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 with provided UUID: 6579dda9-84dc-4b43-ae15-ac891bb07a42 and options: { uuid: UUID("6579dda9-84dc-4b43-ae15-ac891bb07a42"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.541-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796773, 2) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796773, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 350ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.335-0500 I INDEX [conn110] Index build completed: 67967407-7212-43bd-8642-65654c5428ac
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.539-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 4d986ac0-da38-476f-8b41-0d00e97d9da5: test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 ( 4db3102d-eaad-4c21-a6bd-16884034ad49 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.555-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 with provided UUID: 6579dda9-84dc-4b43-ae15-ac891bb07a42 and options: { uuid: UUID("6579dda9-84dc-4b43-ae15-ac891bb07a42"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.335-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.553-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.571-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.337-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 165beaaf-f8bf-4945-91a2-4861bae8a06c: test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 ( 6579dda9-84dc-4b43-ae15-ac891bb07a42 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.554-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 with provided UUID: b854d2d3-d612-411f-873c-68987851fbde and options: { uuid: UUID("b854d2d3-d612-411f-873c-68987851fbde"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.572-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 with provided UUID: b854d2d3-d612-411f-873c-68987851fbde and options: { uuid: UUID("b854d2d3-d612-411f-873c-68987851fbde"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.337-0500 I INDEX [conn112] Index build completed: 165beaaf-f8bf-4945-91a2-4861bae8a06c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.569-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.588-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.345-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.587-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.607-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.345-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.587-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.607-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.348-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.587-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 47b4f574-a225-4836-a609-cbfb30be2fc0: test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 (381e3f8c-9478-4ca9-907a-f585d558d08f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.607-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 26cff53b-0ffc-4e82-ab82-90527ae8e7e7: test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 (381e3f8c-9478-4ca9-907a-f585d558d08f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.348-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.587-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.607-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.348-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (381e3f8c-9478-4ca9-907a-f585d558d08f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 1523), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.587-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.608-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.348-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (381e3f8c-9478-4ca9-907a-f585d558d08f).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.590-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.610-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.348-0500 I STORAGE [conn110] renameCollection: renaming collection 4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5 from test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.591-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 (4db3102d-eaad-4c21-a6bd-16884034ad49) to test5_fsmdb0.agg_out and drop e4feacf1-9e47-4f02-abe4-4194aa3303df.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.612-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 26cff53b-0ffc-4e82-ab82-90527ae8e7e7: test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 ( 381e3f8c-9478-4ca9-907a-f585d558d08f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.348-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (381e3f8c-9478-4ca9-907a-f585d558d08f)'. Ident: 'index-851-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 1523)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.591-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (e4feacf1-9e47-4f02-abe4-4194aa3303df) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 508), t: 1 } and commit timestamp Timestamp(1574796773, 508)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.612-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 (4db3102d-eaad-4c21-a6bd-16884034ad49) to test5_fsmdb0.agg_out and drop e4feacf1-9e47-4f02-abe4-4194aa3303df.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.348-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (381e3f8c-9478-4ca9-907a-f585d558d08f)'. Ident: 'index-855-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 1523)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.591-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (e4feacf1-9e47-4f02-abe4-4194aa3303df).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.612-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (e4feacf1-9e47-4f02-abe4-4194aa3303df) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 508), t: 1 } and commit timestamp Timestamp(1574796773, 508)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.348-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-849-8224331490264904478, commit timestamp: Timestamp(1574796773, 1523)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.591-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 4db3102d-eaad-4c21-a6bd-16884034ad49 from test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.612-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (e4feacf1-9e47-4f02-abe4-4194aa3303df).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.348-0500 I INDEX [conn108] Registering index build: c9b4733b-1469-4b31-ab89-9cd025b0e73b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.591-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e4feacf1-9e47-4f02-abe4-4194aa3303df)'. Ident: 'index-856--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 508)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.612-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 4db3102d-eaad-4c21-a6bd-16884034ad49 from test5_fsmdb0.tmp.agg_out.eadb835d-1997-4771-9a97-973059303c61 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.349-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5172219841990470125, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4224729375304989187, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796770663), clusterTime: Timestamp(1574796770, 5983) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796770, 6047), signature: { hash: BinData(0, 2D4CEA42A39FF3DCCCEE0B94B5C39FC8830744DB), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2684ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.591-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e4feacf1-9e47-4f02-abe4-4194aa3303df)'. Ident: 'index-861--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 508)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.612-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e4feacf1-9e47-4f02-abe4-4194aa3303df)'. Ident: 'index-856--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 508)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.349-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 4fd60e90-1d47-4703-95d5-4ef4fa8ec268: test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 ( 2bdd1f07-4539-490c-be19-956dfe8b90de ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.591-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-855--8000595249233899911, commit timestamp: Timestamp(1574796773, 508)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.612-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e4feacf1-9e47-4f02-abe4-4194aa3303df)'. Ident: 'index-861--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 508)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.352-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 with generated UUID: c60c27da-283d-44b7-bd71-64091fc8c070 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.592-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 with provided UUID: 2bdd1f07-4539-490c-be19-956dfe8b90de and options: { uuid: UUID("2bdd1f07-4539-490c-be19-956dfe8b90de"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.612-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-855--4104909142373009110, commit timestamp: Timestamp(1574796773, 508)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.373-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.593-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 47b4f574-a225-4836-a609-cbfb30be2fc0: test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 ( 381e3f8c-9478-4ca9-907a-f585d558d08f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.613-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 with provided UUID: 2bdd1f07-4539-490c-be19-956dfe8b90de and options: { uuid: UUID("2bdd1f07-4539-490c-be19-956dfe8b90de"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.373-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.607-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.628-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.373-0500 I STORAGE [conn108] Index build initialized: c9b4733b-1469-4b31-ab89-9cd025b0e73b: test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f (13dd6a3b-20f8-4643-8de2-4042973d3402 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.626-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.647-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.373-0500 I INDEX [conn108] Waiting for index build to complete: c9b4733b-1469-4b31-ab89-9cd025b0e73b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.626-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.647-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.373-0500 I INDEX [conn114] Index build completed: 4fd60e90-1d47-4703-95d5-4ef4fa8ec268
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.626-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: c9d79f58-378c-483a-a29e-511a80a4f3c0: test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.647-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 41406076-4eae-4656-9ce1-4b8367157eb5: test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.373-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.626-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.647-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.382-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.627-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.648-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.383-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.629-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.650-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.385-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.633-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c9d79f58-378c-483a-a29e-511a80a4f3c0: test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 ( 4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.656-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 41406076-4eae-4656-9ce1-4b8367157eb5: test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 ( 4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.386-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.648-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.670-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.386-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 2528), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.648-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.670-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.386-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.648-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: cbbe8c43-57dc-42d5-9879-37e95cbd2204: test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 (b854d2d3-d612-411f-873c-68987851fbde ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.670-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 424f4e06-b0c2-4a1d-9423-dfc7a11c52a4: test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 (b854d2d3-d612-411f-873c-68987851fbde ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.386-0500 I STORAGE [conn112] renameCollection: renaming collection b854d2d3-d612-411f-873c-68987851fbde from test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.648-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.670-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.386-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5)'. Ident: 'index-858-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 2528)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.649-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.671-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.386-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5)'. Ident: 'index-859-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 2528)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.650-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 (381e3f8c-9478-4ca9-907a-f585d558d08f) to test5_fsmdb0.agg_out and drop 4db3102d-eaad-4c21-a6bd-16884034ad49.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.672-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 (381e3f8c-9478-4ca9-907a-f585d558d08f) to test5_fsmdb0.agg_out and drop 4db3102d-eaad-4c21-a6bd-16884034ad49.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.386-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-856-8224331490264904478, commit timestamp: Timestamp(1574796773, 2528)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.652-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.673-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.386-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.652-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (4db3102d-eaad-4c21-a6bd-16884034ad49) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 1016), t: 1 } and commit timestamp Timestamp(1574796773, 1016)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.673-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (4db3102d-eaad-4c21-a6bd-16884034ad49) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 1016), t: 1 } and commit timestamp Timestamp(1574796773, 1016)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.386-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3850457771753719760, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8700795699406547808, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796773190), clusterTime: Timestamp(1574796773, 2) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 2), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 194ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.652-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (4db3102d-eaad-4c21-a6bd-16884034ad49).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.673-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (4db3102d-eaad-4c21-a6bd-16884034ad49).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.386-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (b854d2d3-d612-411f-873c-68987851fbde) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 2529), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.652-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 381e3f8c-9478-4ca9-907a-f585d558d08f from test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.673-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 381e3f8c-9478-4ca9-907a-f585d558d08f from test5_fsmdb0.tmp.agg_out.8f408bbd-c182-4e6a-8dfd-db327f4d39b5 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.386-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (b854d2d3-d612-411f-873c-68987851fbde).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.652-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4db3102d-eaad-4c21-a6bd-16884034ad49)'. Ident: 'index-864--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 1016)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.673-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4db3102d-eaad-4c21-a6bd-16884034ad49)'. Ident: 'index-864--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 1016)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.386-0500 I STORAGE [conn46] renameCollection: renaming collection 6579dda9-84dc-4b43-ae15-ac891bb07a42 from test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.652-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4db3102d-eaad-4c21-a6bd-16884034ad49)'. Ident: 'index-867--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 1016)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.673-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4db3102d-eaad-4c21-a6bd-16884034ad49)'. Ident: 'index-867--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 1016)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.386-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (b854d2d3-d612-411f-873c-68987851fbde)'. Ident: 'index-864-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 2529)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.652-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-863--8000595249233899911, commit timestamp: Timestamp(1574796773, 1016)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.673-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-863--4104909142373009110, commit timestamp: Timestamp(1574796773, 1016)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.386-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (b854d2d3-d612-411f-873c-68987851fbde)'. Ident: 'index-865-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 2529)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.653-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f with provided UUID: 13dd6a3b-20f8-4643-8de2-4042973d3402 and options: { uuid: UUID("13dd6a3b-20f8-4643-8de2-4042973d3402"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.674-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f with provided UUID: 13dd6a3b-20f8-4643-8de2-4042973d3402 and options: { uuid: UUID("13dd6a3b-20f8-4643-8de2-4042973d3402"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.386-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-861-8224331490264904478, commit timestamp: Timestamp(1574796773, 2529)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.653-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: cbbe8c43-57dc-42d5-9879-37e95cbd2204: test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 ( b854d2d3-d612-411f-873c-68987851fbde ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.675-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 424f4e06-b0c2-4a1d-9423-dfc7a11c52a4: test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 ( b854d2d3-d612-411f-873c-68987851fbde ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.386-0500 I INDEX [conn110] Registering index build: c699b652-ee21-4161-9ace-dbbd2d50ccdd
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.668-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.690-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.387-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 539349494211393613, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7184830931321571104, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796773190), clusterTime: Timestamp(1574796773, 2) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 2), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 195ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.686-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.709-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.387-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c9b4733b-1469-4b31-ab89-9cd025b0e73b: test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f ( 13dd6a3b-20f8-4643-8de2-4042973d3402 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.686-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.709-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.389-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 with generated UUID: dfec2bc8-7bc8-4998-ba54-0412af661900 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.686-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 072f54c1-654d-4251-8430-c63ed14bd5d9: test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 (6579dda9-84dc-4b43-ae15-ac891bb07a42 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.709-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 4e07f9a9-5296-4505-87f8-56c1e9ab38bc: test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 (6579dda9-84dc-4b43-ae15-ac891bb07a42 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.389-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d with generated UUID: 0d23b1d3-21a6-41eb-b127-bc997c5789cf and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.686-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.709-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.419-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.687-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.710-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.419-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.690-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.712-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.420-0500 I STORAGE [conn110] Index build initialized: c699b652-ee21-4161-9ace-dbbd2d50ccdd: test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 (c60c27da-283d-44b7-bd71-64091fc8c070 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.693-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 072f54c1-654d-4251-8430-c63ed14bd5d9: test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 ( 6579dda9-84dc-4b43-ae15-ac891bb07a42 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.715-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 4e07f9a9-5296-4505-87f8-56c1e9ab38bc: test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 ( 6579dda9-84dc-4b43-ae15-ac891bb07a42 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.420-0500 I INDEX [conn110] Waiting for index build to complete: c699b652-ee21-4161-9ace-dbbd2d50ccdd
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.707-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.729-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.420-0500 I INDEX [conn108] Index build completed: c9b4733b-1469-4b31-ab89-9cd025b0e73b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.707-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.729-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.420-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.707-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 813891a1-6d2a-4f25-a3ea-d055319e8545: test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 (2bdd1f07-4539-490c-be19-956dfe8b90de ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.729-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: b2a84348-caeb-4f52-85bd-06759127993b: test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 (2bdd1f07-4539-490c-be19-956dfe8b90de ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.428-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.708-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.730-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.435-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.708-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.730-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.435-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.709-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5) to test5_fsmdb0.agg_out and drop 381e3f8c-9478-4ca9-907a-f585d558d08f.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.731-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5) to test5_fsmdb0.agg_out and drop 381e3f8c-9478-4ca9-907a-f585d558d08f.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.440-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.711-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.733-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.440-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.711-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (381e3f8c-9478-4ca9-907a-f585d558d08f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 1523), t: 1 } and commit timestamp Timestamp(1574796773, 1523)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.733-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (381e3f8c-9478-4ca9-907a-f585d558d08f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 1523), t: 1 } and commit timestamp Timestamp(1574796773, 1523)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.440-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (6579dda9-84dc-4b43-ae15-ac891bb07a42) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 3035), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.711-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (381e3f8c-9478-4ca9-907a-f585d558d08f).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.733-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (381e3f8c-9478-4ca9-907a-f585d558d08f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.440-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (6579dda9-84dc-4b43-ae15-ac891bb07a42).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.711-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5 from test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.733-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5 from test5_fsmdb0.tmp.agg_out.918b9909-0efe-4e13-bb6b-63c17520acb5 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.441-0500 I STORAGE [conn114] renameCollection: renaming collection 2bdd1f07-4539-490c-be19-956dfe8b90de from test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.711-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (381e3f8c-9478-4ca9-907a-f585d558d08f)'. Ident: 'index-860--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 1523)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.733-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (381e3f8c-9478-4ca9-907a-f585d558d08f)'. Ident: 'index-860--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 1523)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.441-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6579dda9-84dc-4b43-ae15-ac891bb07a42)'. Ident: 'index-863-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 3035)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.711-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (381e3f8c-9478-4ca9-907a-f585d558d08f)'. Ident: 'index-873--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 1523)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.733-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (381e3f8c-9478-4ca9-907a-f585d558d08f)'. Ident: 'index-873--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 1523)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.441-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6579dda9-84dc-4b43-ae15-ac891bb07a42)'. Ident: 'index-869-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 3035)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.711-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-859--8000595249233899911, commit timestamp: Timestamp(1574796773, 1523)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.733-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-859--4104909142373009110, commit timestamp: Timestamp(1574796773, 1523)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.441-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-860-8224331490264904478, commit timestamp: Timestamp(1574796773, 3035)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.713-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 813891a1-6d2a-4f25-a3ea-d055319e8545: test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 ( 2bdd1f07-4539-490c-be19-956dfe8b90de ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.734-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: b2a84348-caeb-4f52-85bd-06759127993b: test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 ( 2bdd1f07-4539-490c-be19-956dfe8b90de ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.441-0500 I INDEX [conn46] Registering index build: d5e287b5-6755-4c9a-b868-67ad0f578795
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.715-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 with provided UUID: c60c27da-283d-44b7-bd71-64091fc8c070 and options: { uuid: UUID("c60c27da-283d-44b7-bd71-64091fc8c070"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.736-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 with provided UUID: c60c27da-283d-44b7-bd71-64091fc8c070 and options: { uuid: UUID("c60c27da-283d-44b7-bd71-64091fc8c070"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.441-0500 I INDEX [conn112] Registering index build: 88596d1c-53c6-433e-b42f-836a21bd7dd5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.730-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.750-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.441-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 427052639665307351, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3497539670535496059, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796773250), clusterTime: Timestamp(1574796773, 508) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 508), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 190ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.751-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.770-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.444-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 with generated UUID: e4f57038-53a5-45ef-a5e3-85c6aaad8527 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.751-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.770-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.444-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: c699b652-ee21-4161-9ace-dbbd2d50ccdd: test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 ( c60c27da-283d-44b7-bd71-64091fc8c070 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.751-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: bf2de939-f121-4249-8057-3206a368b896: test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f (13dd6a3b-20f8-4643-8de2-4042973d3402 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.770-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 489dbe94-20bc-462f-b7bc-edc4a9055f15: test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f (13dd6a3b-20f8-4643-8de2-4042973d3402 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.467-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.752-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.771-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.467-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.752-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.771-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.467-0500 I STORAGE [conn46] Index build initialized: d5e287b5-6755-4c9a-b868-67ad0f578795: test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d (0d23b1d3-21a6-41eb-b127-bc997c5789cf ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.753-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 (b854d2d3-d612-411f-873c-68987851fbde) to test5_fsmdb0.agg_out and drop 4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.772-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 (b854d2d3-d612-411f-873c-68987851fbde) to test5_fsmdb0.agg_out and drop 4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.467-0500 I INDEX [conn46] Waiting for index build to complete: d5e287b5-6755-4c9a-b868-67ad0f578795
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.755-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.773-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.467-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.755-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 2528), t: 1 } and commit timestamp Timestamp(1574796773, 2528)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.774-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 2528), t: 1 } and commit timestamp Timestamp(1574796773, 2528)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.467-0500 I INDEX [conn110] Index build completed: c699b652-ee21-4161-9ace-dbbd2d50ccdd
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.755-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.774-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.475-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.755-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection b854d2d3-d612-411f-873c-68987851fbde from test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.774-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection b854d2d3-d612-411f-873c-68987851fbde from test5_fsmdb0.tmp.agg_out.d1cb94b3-a6b0-4dca-822a-3998a4eb6676 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.475-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.755-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5)'. Ident: 'index-866--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 2528)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.774-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5)'. Ident: 'index-866--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 2528)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.486-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.755-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5)'. Ident: 'index-877--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 2528)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.774-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4be4bebe-0f0a-45b7-bbc7-76c3bf1180b5)'. Ident: 'index-877--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 2528)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.493-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.755-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-865--8000595249233899911, commit timestamp: Timestamp(1574796773, 2528)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.774-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-865--4104909142373009110, commit timestamp: Timestamp(1574796773, 2528)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.493-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.756-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: bf2de939-f121-4249-8057-3206a368b896: test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f ( 13dd6a3b-20f8-4643-8de2-4042973d3402 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.774-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 (6579dda9-84dc-4b43-ae15-ac891bb07a42) to test5_fsmdb0.agg_out and drop b854d2d3-d612-411f-873c-68987851fbde.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.494-0500 I STORAGE [conn112] Index build initialized: 88596d1c-53c6-433e-b42f-836a21bd7dd5: test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 (dfec2bc8-7bc8-4998-ba54-0412af661900 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.756-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 (6579dda9-84dc-4b43-ae15-ac891bb07a42) to test5_fsmdb0.agg_out and drop b854d2d3-d612-411f-873c-68987851fbde.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.775-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (b854d2d3-d612-411f-873c-68987851fbde) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 2529), t: 1 } and commit timestamp Timestamp(1574796773, 2529)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.494-0500 I INDEX [conn112] Waiting for index build to complete: 88596d1c-53c6-433e-b42f-836a21bd7dd5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.756-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (b854d2d3-d612-411f-873c-68987851fbde) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 2529), t: 1 } and commit timestamp Timestamp(1574796773, 2529)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.775-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (b854d2d3-d612-411f-873c-68987851fbde).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.494-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.756-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (b854d2d3-d612-411f-873c-68987851fbde).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.775-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 6579dda9-84dc-4b43-ae15-ac891bb07a42 from test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.494-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (2bdd1f07-4539-490c-be19-956dfe8b90de) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 3541), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.756-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 6579dda9-84dc-4b43-ae15-ac891bb07a42 from test5_fsmdb0.tmp.agg_out.9039189d-2af4-447d-8211-633cf633d337 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.775-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (b854d2d3-d612-411f-873c-68987851fbde)'. Ident: 'index-872--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 2529)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.494-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (2bdd1f07-4539-490c-be19-956dfe8b90de).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.756-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (b854d2d3-d612-411f-873c-68987851fbde)'. Ident: 'index-872--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 2529)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.775-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (b854d2d3-d612-411f-873c-68987851fbde)'. Ident: 'index-879--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 2529)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.494-0500 I STORAGE [conn114] renameCollection: renaming collection 13dd6a3b-20f8-4643-8de2-4042973d3402 from test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.756-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (b854d2d3-d612-411f-873c-68987851fbde)'. Ident: 'index-879--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 2529)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.775-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-871--4104909142373009110, commit timestamp: Timestamp(1574796773, 2529)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.494-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (2bdd1f07-4539-490c-be19-956dfe8b90de)'. Ident: 'index-868-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 3541)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.756-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-871--8000595249233899911, commit timestamp: Timestamp(1574796773, 2529)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.775-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 with provided UUID: dfec2bc8-7bc8-4998-ba54-0412af661900 and options: { uuid: UUID("dfec2bc8-7bc8-4998-ba54-0412af661900"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.494-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (2bdd1f07-4539-490c-be19-956dfe8b90de)'. Ident: 'index-871-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 3541)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.757-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 with provided UUID: dfec2bc8-7bc8-4998-ba54-0412af661900 and options: { uuid: UUID("dfec2bc8-7bc8-4998-ba54-0412af661900"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.776-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 489dbe94-20bc-462f-b7bc-edc4a9055f15: test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f ( 13dd6a3b-20f8-4643-8de2-4042973d3402 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.494-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-866-8224331490264904478, commit timestamp: Timestamp(1574796773, 3541)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.771-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.789-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.494-0500 I INDEX [conn108] Registering index build: 0f876952-1535-4cdc-8ebf-13218d3efaa0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.772-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d with provided UUID: 0d23b1d3-21a6-41eb-b127-bc997c5789cf and options: { uuid: UUID("0d23b1d3-21a6-41eb-b127-bc997c5789cf"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.790-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d with provided UUID: 0d23b1d3-21a6-41eb-b127-bc997c5789cf and options: { uuid: UUID("0d23b1d3-21a6-41eb-b127-bc997c5789cf"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.494-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.787-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.803-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.494-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7046226615424606497, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3592451348208855880, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796773307), clusterTime: Timestamp(1574796773, 1016) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 1016), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 186ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.806-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.812-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796736, 5057)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.496-0500 I COMMAND [conn70] CMD: dropIndexes test5_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.806-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.812-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-346--4104909142373009110 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 15)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.514-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: d5e287b5-6755-4c9a-b868-67ad0f578795: test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d ( 0d23b1d3-21a6-41eb-b127-bc997c5789cf ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.806-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: fd6fde34-9864-4322-8ca2-4039b4a07868: test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 (c60c27da-283d-44b7-bd71-64091fc8c070 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.815-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-347--4104909142373009110 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 15)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.514-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.806-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.823-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.525-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.807-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.823-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.535-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.807-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 (2bdd1f07-4539-490c-be19-956dfe8b90de) to test5_fsmdb0.agg_out and drop 6579dda9-84dc-4b43-ae15-ac891bb07a42.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.823-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: f843c250-96aa-4b39-8afc-f79ff6aab1b3: test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 (c60c27da-283d-44b7-bd71-64091fc8c070 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.535-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.808-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.824-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.536-0500 I STORAGE [conn108] Index build initialized: 0f876952-1535-4cdc-8ebf-13218d3efaa0: test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 (e4f57038-53a5-45ef-a5e3-85c6aaad8527 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.809-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (6579dda9-84dc-4b43-ae15-ac891bb07a42) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 3035), t: 1 } and commit timestamp Timestamp(1574796773, 3035)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.824-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-345--4104909142373009110 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 15)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.536-0500 I INDEX [conn108] Waiting for index build to complete: 0f876952-1535-4cdc-8ebf-13218d3efaa0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.809-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (6579dda9-84dc-4b43-ae15-ac891bb07a42).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.825-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.536-0500 I INDEX [conn46] Index build completed: d5e287b5-6755-4c9a-b868-67ad0f578795
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.809-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 2bdd1f07-4539-490c-be19-956dfe8b90de from test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.825-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 (2bdd1f07-4539-490c-be19-956dfe8b90de) to test5_fsmdb0.agg_out and drop 6579dda9-84dc-4b43-ae15-ac891bb07a42.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.536-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.809-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6579dda9-84dc-4b43-ae15-ac891bb07a42)'. Ident: 'index-870--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 3035)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.826-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-350--4104909142373009110 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 23)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.536-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 3032), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 5009 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 100ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.809-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6579dda9-84dc-4b43-ae15-ac891bb07a42)'. Ident: 'index-883--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 3035)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.829-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.536-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (13dd6a3b-20f8-4643-8de2-4042973d3402) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 4045), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.809-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-869--8000595249233899911, commit timestamp: Timestamp(1574796773, 3035)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.830-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (6579dda9-84dc-4b43-ae15-ac891bb07a42) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 3035), t: 1 } and commit timestamp Timestamp(1574796773, 3035)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.536-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (13dd6a3b-20f8-4643-8de2-4042973d3402).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.809-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 with provided UUID: e4f57038-53a5-45ef-a5e3-85c6aaad8527 and options: { uuid: UUID("e4f57038-53a5-45ef-a5e3-85c6aaad8527"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.830-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (6579dda9-84dc-4b43-ae15-ac891bb07a42).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.536-0500 I STORAGE [conn110] renameCollection: renaming collection c60c27da-283d-44b7-bd71-64091fc8c070 from test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.810-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: fd6fde34-9864-4322-8ca2-4039b4a07868: test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 ( c60c27da-283d-44b7-bd71-64091fc8c070 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.830-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 2bdd1f07-4539-490c-be19-956dfe8b90de from test5_fsmdb0.tmp.agg_out.4aac1564-8d54-4611-a9b6-63377c1a9b20 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.536-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 88596d1c-53c6-433e-b42f-836a21bd7dd5: test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 ( dfec2bc8-7bc8-4998-ba54-0412af661900 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.816-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796736, 5123)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.830-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6579dda9-84dc-4b43-ae15-ac891bb07a42)'. Ident: 'index-870--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 3035)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.536-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (13dd6a3b-20f8-4643-8de2-4042973d3402)'. Ident: 'index-874-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 4045)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.816-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-346--8000595249233899911 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 15)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.830-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6579dda9-84dc-4b43-ae15-ac891bb07a42)'. Ident: 'index-883--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 3035)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.536-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (13dd6a3b-20f8-4643-8de2-4042973d3402)'. Ident: 'index-875-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 4045)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.820-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-347--8000595249233899911 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 15)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.830-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-869--4104909142373009110, commit timestamp: Timestamp(1574796773, 3035)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.536-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-872-8224331490264904478, commit timestamp: Timestamp(1574796773, 4045)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.826-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.831-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 with provided UUID: e4f57038-53a5-45ef-a5e3-85c6aaad8527 and options: { uuid: UUID("e4f57038-53a5-45ef-a5e3-85c6aaad8527"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.536-0500 I INDEX [conn112] Index build completed: 88596d1c-53c6-433e-b42f-836a21bd7dd5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.827-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-345--8000595249233899911 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 15)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.832-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-351--4104909142373009110 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 23)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.536-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.831-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-350--8000595249233899911 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 23)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.834-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: f843c250-96aa-4b39-8afc-f79ff6aab1b3: test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 ( c60c27da-283d-44b7-bd71-64091fc8c070 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.536-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 3032), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 12695 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 108ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.833-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-351--8000595249233899911 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 23)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.842-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-349--4104909142373009110 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 23)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.536-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6614608521762958681, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3700406975284006225, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796773350), clusterTime: Timestamp(1574796773, 1523) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 1523), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 185ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.835-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-349--8000595249233899911 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 23)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.849-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.537-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.853-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.866-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.539-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.853-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.866-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.540-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 with generated UUID: b3230e5b-1c1e-4384-9cf4-934fca48c76e and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.853-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 4e554b3e-00a9-47ef-b8c6-aca7f9f89db0: test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d (0d23b1d3-21a6-41eb-b127-bc997c5789cf ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.866-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 4333a0ad-3452-4160-a0ff-2d91a12c5014: test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d (0d23b1d3-21a6-41eb-b127-bc997c5789cf ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.541-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf with generated UUID: 947a4634-0d07-411f-9af4-fbfaed20ef5f and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.853-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.866-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.542-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 0f876952-1535-4cdc-8ebf-13218d3efaa0: test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 ( e4f57038-53a5-45ef-a5e3-85c6aaad8527 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.853-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.867-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.542-0500 I INDEX [conn108] Index build completed: 0f876952-1535-4cdc-8ebf-13218d3efaa0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.855-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f (13dd6a3b-20f8-4643-8de2-4042973d3402) to test5_fsmdb0.agg_out and drop 2bdd1f07-4539-490c-be19-956dfe8b90de.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.868-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f (13dd6a3b-20f8-4643-8de2-4042973d3402) to test5_fsmdb0.agg_out and drop 2bdd1f07-4539-490c-be19-956dfe8b90de.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.565-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.856-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.869-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.572-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.856-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (2bdd1f07-4539-490c-be19-956dfe8b90de) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 3541), t: 1 } and commit timestamp Timestamp(1574796773, 3541)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.869-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (2bdd1f07-4539-490c-be19-956dfe8b90de) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 3541), t: 1 } and commit timestamp Timestamp(1574796773, 3541)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.572-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.856-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (2bdd1f07-4539-490c-be19-956dfe8b90de).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.869-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (2bdd1f07-4539-490c-be19-956dfe8b90de).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.572-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (c60c27da-283d-44b7-bd71-64091fc8c070) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 5307), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.856-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 13dd6a3b-20f8-4643-8de2-4042973d3402 from test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.869-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 13dd6a3b-20f8-4643-8de2-4042973d3402 from test5_fsmdb0.tmp.agg_out.a9b1fea1-bcc4-4362-9f77-a517ab7e278f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.572-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (c60c27da-283d-44b7-bd71-64091fc8c070).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.856-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (2bdd1f07-4539-490c-be19-956dfe8b90de)'. Ident: 'index-876--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 3541)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.869-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (2bdd1f07-4539-490c-be19-956dfe8b90de)'. Ident: 'index-876--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 3541)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.572-0500 I STORAGE [conn110] renameCollection: renaming collection 0d23b1d3-21a6-41eb-b127-bc997c5789cf from test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.856-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (2bdd1f07-4539-490c-be19-956dfe8b90de)'. Ident: 'index-885--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 3541)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.869-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (2bdd1f07-4539-490c-be19-956dfe8b90de)'. Ident: 'index-885--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 3541)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.572-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c60c27da-283d-44b7-bd71-64091fc8c070)'. Ident: 'index-878-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 5307)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.856-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-875--8000595249233899911, commit timestamp: Timestamp(1574796773, 3541)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.869-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-875--4104909142373009110, commit timestamp: Timestamp(1574796773, 3541)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.572-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c60c27da-283d-44b7-bd71-64091fc8c070)'. Ident: 'index-879-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 5307)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.859-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4e554b3e-00a9-47ef-b8c6-aca7f9f89db0: test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d ( 0d23b1d3-21a6-41eb-b127-bc997c5789cf ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.871-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4333a0ad-3452-4160-a0ff-2d91a12c5014: test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d ( 0d23b1d3-21a6-41eb-b127-bc997c5789cf ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.572-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-876-8224331490264904478, commit timestamp: Timestamp(1574796773, 5307)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.873-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.889-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.573-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.873-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.889-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.573-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (0d23b1d3-21a6-41eb-b127-bc997c5789cf) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 5308), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.873-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 629d4edc-41e9-44ff-89b9-ae4b62589fcd: test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 (dfec2bc8-7bc8-4998-ba54-0412af661900 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.889-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 808f599d-ee08-4b44-b84f-e20ef941b4b2: test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 (dfec2bc8-7bc8-4998-ba54-0412af661900 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.573-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (0d23b1d3-21a6-41eb-b127-bc997c5789cf).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.873-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.889-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.573-0500 I STORAGE [conn112] renameCollection: renaming collection dfec2bc8-7bc8-4998-ba54-0412af661900 from test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.874-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.890-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.573-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 841336611198777912, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3461397092144537213, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796773388), clusterTime: Timestamp(1574796773, 2529) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 2529), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 184ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.876-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.892-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.573-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0d23b1d3-21a6-41eb-b127-bc997c5789cf)'. Ident: 'index-884-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 5308)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.877-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 (c60c27da-283d-44b7-bd71-64091fc8c070) to test5_fsmdb0.agg_out and drop 13dd6a3b-20f8-4643-8de2-4042973d3402.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.893-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 (c60c27da-283d-44b7-bd71-64091fc8c070) to test5_fsmdb0.agg_out and drop 13dd6a3b-20f8-4643-8de2-4042973d3402.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.573-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0d23b1d3-21a6-41eb-b127-bc997c5789cf)'. Ident: 'index-885-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 5308)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.877-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (13dd6a3b-20f8-4643-8de2-4042973d3402) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 4045), t: 1 } and commit timestamp Timestamp(1574796773, 4045)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.893-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (13dd6a3b-20f8-4643-8de2-4042973d3402) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 4045), t: 1 } and commit timestamp Timestamp(1574796773, 4045)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.573-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-881-8224331490264904478, commit timestamp: Timestamp(1574796773, 5308)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.877-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (13dd6a3b-20f8-4643-8de2-4042973d3402).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.893-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (13dd6a3b-20f8-4643-8de2-4042973d3402).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.573-0500 I INDEX [conn46] Registering index build: fe38d519-f87c-454d-bc99-c4d0ccbeabac
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.877-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection c60c27da-283d-44b7-bd71-64091fc8c070 from test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.893-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection c60c27da-283d-44b7-bd71-64091fc8c070 from test5_fsmdb0.tmp.agg_out.26d76f6a-46c6-4c71-a8bb-064dda651fd4 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.573-0500 I INDEX [conn114] Registering index build: a03a0bf1-fe4c-44f2-91f7-bfab1f23a966
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.877-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (13dd6a3b-20f8-4643-8de2-4042973d3402)'. Ident: 'index-882--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 4045)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.893-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (13dd6a3b-20f8-4643-8de2-4042973d3402)'. Ident: 'index-882--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 4045)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.573-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3024470084113837006, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 506312164456683855, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796773388), clusterTime: Timestamp(1574796773, 2529) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 2529), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 184ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.877-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (13dd6a3b-20f8-4643-8de2-4042973d3402)'. Ident: 'index-889--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 4045)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.893-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (13dd6a3b-20f8-4643-8de2-4042973d3402)'. Ident: 'index-889--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 4045)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.576-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c with generated UUID: 3fb64bcc-692d-4f4e-97fe-bfc41162ddf9 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.877-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-881--8000595249233899911, commit timestamp: Timestamp(1574796773, 4045)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.893-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-881--4104909142373009110, commit timestamp: Timestamp(1574796773, 4045)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.576-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b with generated UUID: 77d68908-3b2c-41ce-a53d-ff8cc705dcb3 and options: { temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.879-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 629d4edc-41e9-44ff-89b9-ae4b62589fcd: test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 ( dfec2bc8-7bc8-4998-ba54-0412af661900 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.895-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 808f599d-ee08-4b44-b84f-e20ef941b4b2: test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 ( dfec2bc8-7bc8-4998-ba54-0412af661900 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.604-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.894-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.909-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.604-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.894-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.909-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.604-0500 I STORAGE [conn46] Index build initialized: fe38d519-f87c-454d-bc99-c4d0ccbeabac: test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 (b3230e5b-1c1e-4384-9cf4-934fca48c76e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.894-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 4fe6fdbe-5dc6-4dff-9ee6-7bd62cfbb7b9: test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 (e4f57038-53a5-45ef-a5e3-85c6aaad8527 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.909-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 464a109a-7d87-4ff7-98dc-009cb8523d6a: test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 (e4f57038-53a5-45ef-a5e3-85c6aaad8527 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.604-0500 I INDEX [conn46] Waiting for index build to complete: fe38d519-f87c-454d-bc99-c4d0ccbeabac
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.894-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.909-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.610-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.895-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.910-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.617-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.897-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.912-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.618-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.898-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 with provided UUID: b3230e5b-1c1e-4384-9cf4-934fca48c76e and options: { uuid: UUID("b3230e5b-1c1e-4384-9cf4-934fca48c76e"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.913-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796773, 4048) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796773, 4178), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 369ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.618-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (dfec2bc8-7bc8-4998-ba54-0412af661900) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 5556), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.898-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4fe6fdbe-5dc6-4dff-9ee6-7bd62cfbb7b9: test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 ( e4f57038-53a5-45ef-a5e3-85c6aaad8527 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.914-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 with provided UUID: b3230e5b-1c1e-4384-9cf4-934fca48c76e and options: { uuid: UUID("b3230e5b-1c1e-4384-9cf4-934fca48c76e"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.618-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (dfec2bc8-7bc8-4998-ba54-0412af661900).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.913-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.914-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 464a109a-7d87-4ff7-98dc-009cb8523d6a: test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 ( e4f57038-53a5-45ef-a5e3-85c6aaad8527 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.618-0500 I STORAGE [conn108] renameCollection: renaming collection e4f57038-53a5-45ef-a5e3-85c6aaad8527 from test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.927-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf with provided UUID: 947a4634-0d07-411f-9af4-fbfaed20ef5f and options: { uuid: UUID("947a4634-0d07-411f-9af4-fbfaed20ef5f"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.925-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.618-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (dfec2bc8-7bc8-4998-ba54-0412af661900)'. Ident: 'index-883-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 5556)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.939-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.940-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf with provided UUID: 947a4634-0d07-411f-9af4-fbfaed20ef5f and options: { uuid: UUID("947a4634-0d07-411f-9af4-fbfaed20ef5f"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.618-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (dfec2bc8-7bc8-4998-ba54-0412af661900)'. Ident: 'index-889-8224331490264904478', commit timestamp: 'Timestamp(1574796773, 5556)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.944-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d (0d23b1d3-21a6-41eb-b127-bc997c5789cf) to test5_fsmdb0.agg_out and drop c60c27da-283d-44b7-bd71-64091fc8c070.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.952-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.618-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-880-8224331490264904478, commit timestamp: Timestamp(1574796773, 5556)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.944-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (c60c27da-283d-44b7-bd71-64091fc8c070) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 5307), t: 1 } and commit timestamp Timestamp(1574796773, 5307)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.957-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d (0d23b1d3-21a6-41eb-b127-bc997c5789cf) to test5_fsmdb0.agg_out and drop c60c27da-283d-44b7-bd71-64091fc8c070.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.618-0500 I INDEX [conn110] Registering index build: 132be239-820f-4df5-a6be-3f003b20d167
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.945-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (c60c27da-283d-44b7-bd71-64091fc8c070).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.957-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (c60c27da-283d-44b7-bd71-64091fc8c070) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 5307), t: 1 } and commit timestamp Timestamp(1574796773, 5307)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.618-0500 I INDEX [conn112] Registering index build: f7e19d26-7dfe-41e7-968b-baced9cb5817
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.945-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 0d23b1d3-21a6-41eb-b127-bc997c5789cf from test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.957-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (c60c27da-283d-44b7-bd71-64091fc8c070).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.618-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.945-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c60c27da-283d-44b7-bd71-64091fc8c070)'. Ident: 'index-888--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 5307)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.957-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 0d23b1d3-21a6-41eb-b127-bc997c5789cf from test5_fsmdb0.tmp.agg_out.36b9c844-f38b-4762-a95f-c907f28f1c0d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.618-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3542751822279002005, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6469241343620799050, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796773443), clusterTime: Timestamp(1574796773, 3035) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 3035), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 174ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.945-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c60c27da-283d-44b7-bd71-64091fc8c070)'. Ident: 'index-895--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 5307)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.957-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c60c27da-283d-44b7-bd71-64091fc8c070)'. Ident: 'index-888--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 5307)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.619-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.945-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-887--8000595249233899911, commit timestamp: Timestamp(1574796773, 5307)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.957-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c60c27da-283d-44b7-bd71-64091fc8c070)'. Ident: 'index-895--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 5307)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.629-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.945-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 (dfec2bc8-7bc8-4998-ba54-0412af661900) to test5_fsmdb0.agg_out and drop 0d23b1d3-21a6-41eb-b127-bc997c5789cf.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.957-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-887--4104909142373009110, commit timestamp: Timestamp(1574796773, 5307)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.637-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.945-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (0d23b1d3-21a6-41eb-b127-bc997c5789cf) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 5308), t: 1 } and commit timestamp Timestamp(1574796773, 5308)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.958-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 (dfec2bc8-7bc8-4998-ba54-0412af661900) to test5_fsmdb0.agg_out and drop 0d23b1d3-21a6-41eb-b127-bc997c5789cf.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.637-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.945-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (0d23b1d3-21a6-41eb-b127-bc997c5789cf).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.958-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (0d23b1d3-21a6-41eb-b127-bc997c5789cf) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 5308), t: 1 } and commit timestamp Timestamp(1574796773, 5308)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.637-0500 I STORAGE [conn114] Index build initialized: a03a0bf1-fe4c-44f2-91f7-bfab1f23a966: test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf (947a4634-0d07-411f-9af4-fbfaed20ef5f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.945-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection dfec2bc8-7bc8-4998-ba54-0412af661900 from test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.958-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (0d23b1d3-21a6-41eb-b127-bc997c5789cf).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.637-0500 I INDEX [conn114] Waiting for index build to complete: a03a0bf1-fe4c-44f2-91f7-bfab1f23a966
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.945-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0d23b1d3-21a6-41eb-b127-bc997c5789cf)'. Ident: 'index-894--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 5308)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.958-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection dfec2bc8-7bc8-4998-ba54-0412af661900 from test5_fsmdb0.tmp.agg_out.19b63724-30b4-4774-9f26-1dced35c91e2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.637-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.945-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0d23b1d3-21a6-41eb-b127-bc997c5789cf)'. Ident: 'index-899--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 5308)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.958-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0d23b1d3-21a6-41eb-b127-bc997c5789cf)'. Ident: 'index-894--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 5308)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.638-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: fe38d519-f87c-454d-bc99-c4d0ccbeabac: test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 ( b3230e5b-1c1e-4384-9cf4-934fca48c76e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.945-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-893--8000595249233899911, commit timestamp: Timestamp(1574796773, 5308)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.958-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0d23b1d3-21a6-41eb-b127-bc997c5789cf)'. Ident: 'index-899--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 5308)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.638-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.949-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c with provided UUID: 3fb64bcc-692d-4f4e-97fe-bfc41162ddf9 and options: { uuid: UUID("3fb64bcc-692d-4f4e-97fe-bfc41162ddf9"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.958-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-893--4104909142373009110, commit timestamp: Timestamp(1574796773, 5308)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.639-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca with generated UUID: fc9e6bfd-af76-4606-a934-860e20aba03a and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.961-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.962-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c with provided UUID: 3fb64bcc-692d-4f4e-97fe-bfc41162ddf9 and options: { uuid: UUID("3fb64bcc-692d-4f4e-97fe-bfc41162ddf9"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.648-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.962-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b with provided UUID: 77d68908-3b2c-41ce-a53d-ff8cc705dcb3 and options: { uuid: UUID("77d68908-3b2c-41ce-a53d-ff8cc705dcb3"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.977-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.665-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.977-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.978-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b with provided UUID: 77d68908-3b2c-41ce-a53d-ff8cc705dcb3 and options: { uuid: UUID("77d68908-3b2c-41ce-a53d-ff8cc705dcb3"), temp: true, validationLevel: "off", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.665-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.979-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 (e4f57038-53a5-45ef-a5e3-85c6aaad8527) to test5_fsmdb0.agg_out and drop dfec2bc8-7bc8-4998-ba54-0412af661900.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.993-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.665-0500 I STORAGE [conn110] Index build initialized: 132be239-820f-4df5-a6be-3f003b20d167: test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b (77d68908-3b2c-41ce-a53d-ff8cc705dcb3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.979-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (dfec2bc8-7bc8-4998-ba54-0412af661900) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 5556), t: 1 } and commit timestamp Timestamp(1574796773, 5556)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.996-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 (e4f57038-53a5-45ef-a5e3-85c6aaad8527) to test5_fsmdb0.agg_out and drop dfec2bc8-7bc8-4998-ba54-0412af661900.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.665-0500 I INDEX [conn110] Waiting for index build to complete: 132be239-820f-4df5-a6be-3f003b20d167
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.979-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (dfec2bc8-7bc8-4998-ba54-0412af661900).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.996-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (dfec2bc8-7bc8-4998-ba54-0412af661900) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796773, 5556), t: 1 } and commit timestamp Timestamp(1574796773, 5556)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.665-0500 I INDEX [conn46] Index build completed: fe38d519-f87c-454d-bc99-c4d0ccbeabac
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.979-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection e4f57038-53a5-45ef-a5e3-85c6aaad8527 from test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.996-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (dfec2bc8-7bc8-4998-ba54-0412af661900).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.666-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: a03a0bf1-fe4c-44f2-91f7-bfab1f23a966: test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf ( 947a4634-0d07-411f-9af4-fbfaed20ef5f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.980-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (dfec2bc8-7bc8-4998-ba54-0412af661900)'. Ident: 'index-892--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 5556)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.996-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection e4f57038-53a5-45ef-a5e3-85c6aaad8527 from test5_fsmdb0.tmp.agg_out.e5ee05e9-cbe4-4e00-a43c-44b352f5f205 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.672-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.980-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (dfec2bc8-7bc8-4998-ba54-0412af661900)'. Ident: 'index-901--8000595249233899911', commit timestamp: 'Timestamp(1574796773, 5556)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.996-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (dfec2bc8-7bc8-4998-ba54-0412af661900)'. Ident: 'index-892--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 5556)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.672-0500 I INDEX [conn108] Registering index build: 1187f68c-efb3-451d-ac3b-e86edc3c89eb
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.980-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-891--8000595249233899911, commit timestamp: Timestamp(1574796773, 5556)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.996-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (dfec2bc8-7bc8-4998-ba54-0412af661900)'. Ident: 'index-901--4104909142373009110', commit timestamp: 'Timestamp(1574796773, 5556)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.688-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.994-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:53.996-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-891--4104909142373009110, commit timestamp: Timestamp(1574796773, 5556)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.688-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.994-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:54.011-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.688-0500 I STORAGE [conn112] Index build initialized: f7e19d26-7dfe-41e7-968b-baced9cb5817: test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c (3fb64bcc-692d-4f4e-97fe-bfc41162ddf9 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.994-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 5305ab12-624f-4709-861a-9ede23d42ebe: test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 (b3230e5b-1c1e-4384-9cf4-934fca48c76e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:54.011-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.688-0500 I INDEX [conn112] Waiting for index build to complete: f7e19d26-7dfe-41e7-968b-baced9cb5817
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.994-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:54.011-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 62091a30-e44d-403a-bce4-9568b91b74b1: test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 (b3230e5b-1c1e-4384-9cf4-934fca48c76e ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.688-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.995-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:54.011-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.689-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.997-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:54.012-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.698-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:53.998-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca with provided UUID: fc9e6bfd-af76-4606-a934-860e20aba03a and options: { uuid: UUID("fc9e6bfd-af76-4606-a934-860e20aba03a"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:54.014-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:54.001-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 5305ab12-624f-4709-861a-9ede23d42ebe: test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 ( b3230e5b-1c1e-4384-9cf4-934fca48c76e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:53.705-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:54.016-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 62091a30-e44d-403a-bce4-9568b91b74b1: test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 ( b3230e5b-1c1e-4384-9cf4-934fca48c76e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:54.015-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.483-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:54.017-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca with provided UUID: fc9e6bfd-af76-4606-a934-860e20aba03a and options: { uuid: UUID("fc9e6bfd-af76-4606-a934-860e20aba03a"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:54.031-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.483-0500 I STORAGE [conn108] Index build initialized: 1187f68c-efb3-451d-ac3b-e86edc3c89eb: test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca (fc9e6bfd-af76-4606-a934-860e20aba03a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:54.032-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:54.031-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.483-0500 I INDEX [conn108] Waiting for index build to complete: 1187f68c-efb3-451d-ac3b-e86edc3c89eb
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:54.047-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:54.031-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 20b6bf73-86c9-4b38-84b4-f96ba4b0dfae: test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf (947a4634-0d07-411f-9af4-fbfaed20ef5f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.483-0500 I INDEX [conn114] Index build completed: a03a0bf1-fe4c-44f2-91f7-bfab1f23a966
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:54.047-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:54.031-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.484-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 5307), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 14653 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2911ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:54.047-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 9c6311a5-e976-47bd-936b-666663087f69: test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf (947a4634-0d07-411f-9af4-fbfaed20ef5f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:54.032-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.484-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:54.047-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:54.034-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.484-0500 I COMMAND [conn46] command admin.$cmd appName: "tid:3" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "off", validationAction: "warn" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 6065), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "admin" } numYields:0 ok:0 errMsg:"collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:614 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2793123 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2793ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:54.048-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:54.039-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 20b6bf73-86c9-4b38-84b4-f96ba4b0dfae: test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf ( 947a4634-0d07-411f-9af4-fbfaed20ef5f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.484-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796773, 4048), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796773, 4178), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796773, 4048). Collection minimum timestamp is Timestamp(1574796773, 6013)" errName:SnapshotUnavailable errCode:246 reslen:602 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2568859 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2569ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:54.050-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.484-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:54.054-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 9c6311a5-e976-47bd-936b-666663087f69: test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf ( 947a4634-0d07-411f-9af4-fbfaed20ef5f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.484-0500 I COMMAND [conn114] CMD: drop test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.484-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 (b3230e5b-1c1e-4384-9cf4-934fca48c76e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.484-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 (b3230e5b-1c1e-4384-9cf4-934fca48c76e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.485-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 (b3230e5b-1c1e-4384-9cf4-934fca48c76e)'. Ident: 'index-895-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 3)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.485-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 (b3230e5b-1c1e-4384-9cf4-934fca48c76e)'. Ident: 'index-897-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 3)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.485-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99'. Ident: collection-893-8224331490264904478, commit timestamp: Timestamp(1574796776, 3)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.485-0500 I COMMAND [conn70] command test5_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3657042777008335311, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7198044759979886107, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796773537), clusterTime: Timestamp(1574796773, 4044) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 4045), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:985 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 1328 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2946ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.572-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 132be239-820f-4df5-a6be-3f003b20d167: test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b ( 77d68908-3b2c-41ce-a53d-ff8cc705dcb3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.572-0500 I INDEX [conn110] Index build completed: 132be239-820f-4df5-a6be-3f003b20d167
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.572-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 5556), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 232 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2953ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.572-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.573-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.576-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.579-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.582-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 with generated UUID: f13ebb13-8b27-4155-843a-421ecee443d6 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.583-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 1187f68c-efb3-451d-ac3b-e86edc3c89eb: test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca ( fc9e6bfd-af76-4606-a934-860e20aba03a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.583-0500 I INDEX [conn108] Index build completed: 1187f68c-efb3-451d-ac3b-e86edc3c89eb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.583-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 5628), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2910ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.585-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: f7e19d26-7dfe-41e7-968b-baced9cb5817: test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c ( 3fb64bcc-692d-4f4e-97fe-bfc41162ddf9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.585-0500 I INDEX [conn112] Index build completed: f7e19d26-7dfe-41e7-968b-baced9cb5817
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.585-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 5555), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 7126 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2974ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.590-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.590-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.590-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 51bffcd0-40c6-4347-a752-4411cc7e2d6a: test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b (77d68908-3b2c-41ce-a53d-ff8cc705dcb3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.591-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.592-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.593-0500 I COMMAND [ReplWriterWorker-0] CMD: drop test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.593-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 (b3230e5b-1c1e-4384-9cf4-934fca48c76e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 3), t: 1 } and commit timestamp Timestamp(1574796776, 3)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.593-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 (b3230e5b-1c1e-4384-9cf4-934fca48c76e).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.593-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 (b3230e5b-1c1e-4384-9cf4-934fca48c76e)'. Ident: 'index-906--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 3)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.593-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 (b3230e5b-1c1e-4384-9cf4-934fca48c76e)'. Ident: 'index-913--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 3)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.593-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99'. Ident: collection-905--8000595249233899911, commit timestamp: Timestamp(1574796776, 3)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.595-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.595-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf appName: "tid:0" command: insert { insert: "tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf", bypassDocumentValidation: false, ordered: false, documents: 500, shardVersion: [ Timestamp(0, 0), ObjectId('000000000000000000000000') ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, writeConcern: { w: 1, wtimeout: 0 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 3), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } ninserted:500 keysInserted:1000 numYields:0 reslen:400 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 8 } }, ReplicationStateTransition: { acquireCount: { w: 8 } }, Global: { acquireCount: { w: 8 } }, Database: { acquireCount: { w: 8 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 90960 } }, Collection: { acquireCount: { w: 8 } }, Mutex: { acquireCount: { r: 1016 } } } flowControl:{ acquireCount: 8 } storage:{ timeWaitingMicros: { schemaLock: 4709 } } protocol:op_msg 106ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.605-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.606-0500 I INDEX [conn110] Registering index build: 763a03dd-6f84-4afa-b7ba-b42f9f5b3536
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.606-0500 I COMMAND [conn46] CMD: drop test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.606-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 51bffcd0-40c6-4347-a752-4411cc7e2d6a: test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b ( 77d68908-3b2c-41ce-a53d-ff8cc705dcb3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.609-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.609-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.610-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 704d4066-7cd9-4ac8-a1e0-619c16d221fb: test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b (77d68908-3b2c-41ce-a53d-ff8cc705dcb3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.610-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.611-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.612-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.612-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 (b3230e5b-1c1e-4384-9cf4-934fca48c76e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 3), t: 1 } and commit timestamp Timestamp(1574796776, 3)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.612-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 (b3230e5b-1c1e-4384-9cf4-934fca48c76e).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.612-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 (b3230e5b-1c1e-4384-9cf4-934fca48c76e)'. Ident: 'index-906--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 3)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.612-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99 (b3230e5b-1c1e-4384-9cf4-934fca48c76e)'. Ident: 'index-913--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 3)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.612-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.51ac414b-8725-40ae-81d5-15e379541c99'. Ident: collection-905--4104909142373009110, commit timestamp: Timestamp(1574796776, 3)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.612-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796776, 2) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796776, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 125ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.614-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.615-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.615-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.615-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 5a0f3ad7-ba0d-47fd-ae55-20a316df8adf: test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca (fc9e6bfd-af76-4606-a934-860e20aba03a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.615-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.616-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.617-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 704d4066-7cd9-4ac8-a1e0-619c16d221fb: test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b ( 77d68908-3b2c-41ce-a53d-ff8cc705dcb3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.619-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.626-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.626-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.626-0500 I STORAGE [conn110] Index build initialized: 763a03dd-6f84-4afa-b7ba-b42f9f5b3536: test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 (f13ebb13-8b27-4155-843a-421ecee443d6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.626-0500 I INDEX [conn110] Waiting for index build to complete: 763a03dd-6f84-4afa-b7ba-b42f9f5b3536
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.626-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf (947a4634-0d07-411f-9af4-fbfaed20ef5f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.626-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf (947a4634-0d07-411f-9af4-fbfaed20ef5f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.626-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf (947a4634-0d07-411f-9af4-fbfaed20ef5f)'. Ident: 'index-896-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 2011)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.626-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf (947a4634-0d07-411f-9af4-fbfaed20ef5f)'. Ident: 'index-903-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 2011)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.626-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf'. Ident: collection-894-8224331490264904478, commit timestamp: Timestamp(1574796776, 2011)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.626-0500 I COMMAND [conn68] command test5_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5572713014134034155, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4594563994088025508, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796773540), clusterTime: Timestamp(1574796773, 4048) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 4113), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:985 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3085ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.626-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (e4f57038-53a5-45ef-a5e3-85c6aaad8527) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 2012), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (e4f57038-53a5-45ef-a5e3-85c6aaad8527).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I STORAGE [conn108] renameCollection: renaming collection fc9e6bfd-af76-4606-a934-860e20aba03a from test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I COMMAND [conn112] CMD: drop test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e4f57038-53a5-45ef-a5e3-85c6aaad8527)'. Ident: 'index-888-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 2012)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e4f57038-53a5-45ef-a5e3-85c6aaad8527)'. Ident: 'index-891-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 2012)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-886-8224331490264904478, commit timestamp: Timestamp(1574796776, 2012)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I COMMAND [conn114] CMD: drop test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:56.627-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796773, 4048), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:815 protocol:op_msg 3086ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c (3fb64bcc-692d-4f4e-97fe-bfc41162ddf9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b (77d68908-3b2c-41ce-a53d-ff8cc705dcb3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c (3fb64bcc-692d-4f4e-97fe-bfc41162ddf9).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b (77d68908-3b2c-41ce-a53d-ff8cc705dcb3).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c (3fb64bcc-692d-4f4e-97fe-bfc41162ddf9)'. Ident: 'index-901-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 2013)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c (3fb64bcc-692d-4f4e-97fe-bfc41162ddf9)'. Ident: 'index-909-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 2013)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:56.627-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796773, 5560), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2989ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:56.628-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796773, 5308), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:815 protocol:op_msg 3053ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.630-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 5a0f3ad7-ba0d-47fd-ae55-20a316df8adf: test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca ( fc9e6bfd-af76-4606-a934-860e20aba03a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.634-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:56.774-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20004 because the pool meets constraints; 2 connections to that host remain open
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.632-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796735, 546)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.646-0500 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1574796735, 546)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:32:57.781-0500 I NETWORK [conn52] end connection 127.0.0.1:55678 (25 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b (77d68908-3b2c-41ce-a53d-ff8cc705dcb3)'. Ident: 'index-902-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 2014)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:56.628-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796773, 5372), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:815 protocol:op_msg 3052ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:56.725-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796776, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 144ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.640-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.634-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:56.774-0500 I NETWORK [conn75] end connection 127.0.0.1:46130 (47 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.632-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-198--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 7)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.646-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-198--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 7)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c'. Ident: collection-898-8224331490264904478, commit timestamp: Timestamp(1574796776, 2013)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:56.828-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796776, 2011), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 200ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:56.916-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796776, 2014), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 287ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.640-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.634-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: bc0738d2-d719-4ed3-8ef4-3a33041f72bf: test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca (fc9e6bfd-af76-4606-a934-860e20aba03a ): indexes: 1
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:32:57.781-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 1 connections to that host remain open
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.633-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-201--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 7)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.647-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-201--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 7)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b (77d68908-3b2c-41ce-a53d-ff8cc705dcb3)'. Ident: 'index-905-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 2014)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:56.859-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796776, 2014), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 230ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:56.916-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796776, 2522), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 189ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.640-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 9a83f314-551a-47cc-8173-251829b64dc7: test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c (3fb64bcc-692d-4f4e-97fe-bfc41162ddf9 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.634-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.634-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-197--2310912778499990807 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 7)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.648-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-197--7234316082034423155 (ns: test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 7)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b'. Ident: collection-899-8224331490264904478, commit timestamp: Timestamp(1574796776, 2014)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:56.860-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796776, 2014), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 231ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.640-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.635-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.636-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-212--2310912778499990807 (ns: config.cache.chunks.test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 11)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.649-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-212--7234316082034423155 (ns: config.cache.chunks.test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 11)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1858080185207317736, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6048704785059299724, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796773638), clusterTime: Timestamp(1574796773, 5560) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 5560), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2988ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:57.044-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796776, 3034), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 214ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.641-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.638-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.637-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-219--2310912778499990807 (ns: config.cache.chunks.test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 11)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.651-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-219--7234316082034423155 (ns: config.cache.chunks.test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 11)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I COMMAND [conn71] command test5_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3702739115166829174, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4355920526779673602, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796773574), clusterTime: Timestamp(1574796773, 5308) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 5436), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:985 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3051ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.644-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.641-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: bc0738d2-d719-4ed3-8ef4-3a33041f72bf: test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca ( fc9e6bfd-af76-4606-a934-860e20aba03a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.638-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-211--2310912778499990807 (ns: config.cache.chunks.test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 11)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.653-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-211--7234316082034423155 (ns: config.cache.chunks.test3_fsmdb0.agg_out) with drop timestamp Timestamp(1574796715, 11)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I COMMAND [conn67] command test5_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3650755805921048649, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3264644893939718469, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796773575), clusterTime: Timestamp(1574796773, 5372) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796773, 5436), signature: { hash: BinData(0, 848089E374A89995AC49B1E2978B3077A0938EA9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796767, 747), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"warn\" }, new options: { validationLevel: \"moderate\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:985 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3051ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.646-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 9a83f314-551a-47cc-8173-251829b64dc7: test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c ( 3fb64bcc-692d-4f4e-97fe-bfc41162ddf9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.661-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.639-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-166--2310912778499990807 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 16)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.654-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-166--7234316082034423155 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 16)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.627-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.648-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 with provided UUID: f13ebb13-8b27-4155-843a-421ecee443d6 and options: { uuid: UUID("f13ebb13-8b27-4155-843a-421ecee443d6"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.661-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.640-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-167--2310912778499990807 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 16)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.655-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-167--7234316082034423155 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 16)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.656-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-165--7234316082034423155 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 16)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.666-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.661-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 29aacdfd-3bb3-4956-a959-bf2f7c1a7382: test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c (3fb64bcc-692d-4f4e-97fe-bfc41162ddf9 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.641-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-165--2310912778499990807 (ns: test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 16)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.629-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 with generated UUID: e1fb7d34-785d-46dd-a046-741e30624528 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.657-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-170--7234316082034423155 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 25)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.684-0500 I COMMAND [ReplWriterWorker-1] CMD: drop test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.662-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.642-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-170--2310912778499990807 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 25)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.630-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df with generated UUID: 3dfc5a02-f26e-4f15-91ee-766096d1fd09 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.658-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-171--7234316082034423155 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 25)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.684-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf (947a4634-0d07-411f-9af4-fbfaed20ef5f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 2011), t: 1 } and commit timestamp Timestamp(1574796776, 2011)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.662-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.643-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-171--2310912778499990807 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 25)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.630-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead with generated UUID: a080f6f9-209a-4d76-b916-1c2b632bacea and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.659-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-169--7234316082034423155 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 25)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.684-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf (947a4634-0d07-411f-9af4-fbfaed20ef5f).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.666-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.645-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-169--2310912778499990807 (ns: config.cache.chunks.test3_fsmdb0.fsmcoll0) with drop timestamp Timestamp(1574796715, 25)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.630-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.660-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-234--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2618)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.684-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf (947a4634-0d07-411f-9af4-fbfaed20ef5f)'. Ident: 'index-908--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 2011)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.671-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 29aacdfd-3bb3-4956-a959-bf2f7c1a7382: test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c ( 3fb64bcc-692d-4f4e-97fe-bfc41162ddf9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.645-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-234--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2618)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.630-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 with generated UUID: 4cc54e7b-47b4-4788-a5b6-787f5e0527ad and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.663-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-235--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2618)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.684-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf (947a4634-0d07-411f-9af4-fbfaed20ef5f)'. Ident: 'index-917--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 2011)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.673-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 with provided UUID: f13ebb13-8b27-4155-843a-421ecee443d6 and options: { uuid: UUID("f13ebb13-8b27-4155-843a-421ecee443d6"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.646-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-235--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2618)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.669-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 763a03dd-6f84-4afa-b7ba-b42f9f5b3536: test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 ( f13ebb13-8b27-4155-843a-421ecee443d6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.664-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-233--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2618)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.685-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf'. Ident: collection-907--8000595249233899911, commit timestamp: Timestamp(1574796776, 2011)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.690-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.648-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-233--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2618)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.670-0500 I INDEX [conn110] Index build completed: 763a03dd-6f84-4afa-b7ba-b42f9f5b3536
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.665-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-238--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2967)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.685-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca (fc9e6bfd-af76-4606-a934-860e20aba03a) to test5_fsmdb0.agg_out and drop e4f57038-53a5-45ef-a5e3-85c6aaad8527.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.708-0500 I COMMAND [ReplWriterWorker-2] CMD: drop test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.649-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-238--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2967)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.679-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.666-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-247--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2967)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.685-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (e4f57038-53a5-45ef-a5e3-85c6aaad8527) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 2012), t: 1 } and commit timestamp Timestamp(1574796776, 2012)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.708-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf (947a4634-0d07-411f-9af4-fbfaed20ef5f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 2011), t: 1 } and commit timestamp Timestamp(1574796776, 2011)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.650-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-247--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2967)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.679-0500 I INDEX [conn112] Registering index build: 07a8f8a1-4497-475f-bf92-fafd67cc1885
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.667-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-237--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2967)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.685-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (e4f57038-53a5-45ef-a5e3-85c6aaad8527).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.708-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf (947a4634-0d07-411f-9af4-fbfaed20ef5f).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.651-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-237--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 2967)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.687-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.668-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-246--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3032)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.685-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection fc9e6bfd-af76-4606-a934-860e20aba03a from test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.708-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf (947a4634-0d07-411f-9af4-fbfaed20ef5f)'. Ident: 'index-908--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 2011)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.653-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-246--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3032)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.688-0500 I INDEX [conn114] Registering index build: 74f1561a-1a0a-4675-8dc3-99c3d2f1b953
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.669-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-253--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3032)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.685-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e4f57038-53a5-45ef-a5e3-85c6aaad8527)'. Ident: 'index-898--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 2012)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.708-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf (947a4634-0d07-411f-9af4-fbfaed20ef5f)'. Ident: 'index-917--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 2011)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.655-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-253--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3032)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.695-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.670-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-245--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3032)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.685-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e4f57038-53a5-45ef-a5e3-85c6aaad8527)'. Ident: 'index-903--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 2012)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.708-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.5557f527-e659-4b40-8e85-31af8ed626cf'. Ident: collection-907--4104909142373009110, commit timestamp: Timestamp(1574796776, 2011)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.656-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-245--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3032)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.696-0500 I INDEX [conn108] Registering index build: 6be6c46a-4a63-4039-9009-0210a8901805
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.672-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-242--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3033)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.685-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-897--8000595249233899911, commit timestamp: Timestamp(1574796776, 2012)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.709-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca (fc9e6bfd-af76-4606-a934-860e20aba03a) to test5_fsmdb0.agg_out and drop e4f57038-53a5-45ef-a5e3-85c6aaad8527.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.657-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-242--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3033)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.704-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.673-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-251--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3033)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.692-0500 I COMMAND [ReplWriterWorker-13] CMD: drop test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.709-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (e4f57038-53a5-45ef-a5e3-85c6aaad8527) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 2012), t: 1 } and commit timestamp Timestamp(1574796776, 2012)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.659-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-251--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3033)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.704-0500 I INDEX [conn46] Registering index build: db044671-936a-43ac-93d1-8918873401ae
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.674-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-241--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3033)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.692-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c (3fb64bcc-692d-4f4e-97fe-bfc41162ddf9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 2013), t: 1 } and commit timestamp Timestamp(1574796776, 2013)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.709-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (e4f57038-53a5-45ef-a5e3-85c6aaad8527).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.660-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-241--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796716, 3033)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.724-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.675-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-244--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6) with drop timestamp Timestamp(1574796717, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.692-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c (3fb64bcc-692d-4f4e-97fe-bfc41162ddf9).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.709-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection fc9e6bfd-af76-4606-a934-860e20aba03a from test5_fsmdb0.tmp.agg_out.0c5537b9-a949-457e-80ed-1799e62695ca to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.661-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-244--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6) with drop timestamp Timestamp(1574796717, 5)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.724-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.676-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-255--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6) with drop timestamp Timestamp(1574796717, 5)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.692-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c (3fb64bcc-692d-4f4e-97fe-bfc41162ddf9)'. Ident: 'index-910--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 2013)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.709-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e4f57038-53a5-45ef-a5e3-85c6aaad8527)'. Ident: 'index-898--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 2012)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.663-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-255--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6) with drop timestamp Timestamp(1574796717, 5)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.724-0500 I STORAGE [conn112] Index build initialized: 07a8f8a1-4497-475f-bf92-fafd67cc1885: test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 (e1fb7d34-785d-46dd-a046-741e30624528 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.677-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-243--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6) with drop timestamp Timestamp(1574796717, 5)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.678-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-240--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2151)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.709-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e4f57038-53a5-45ef-a5e3-85c6aaad8527)'. Ident: 'index-903--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 2012)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.664-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-243--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.2ff700fc-2f7e-485d-8490-9b8f544d5fe6) with drop timestamp Timestamp(1574796717, 5)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.724-0500 I INDEX [conn112] Waiting for index build to complete: 07a8f8a1-4497-475f-bf92-fafd67cc1885
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.692-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c (3fb64bcc-692d-4f4e-97fe-bfc41162ddf9)'. Ident: 'index-923--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 2013)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.680-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-249--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2151)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.709-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-897--4104909142373009110, commit timestamp: Timestamp(1574796776, 2012)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.666-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-240--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2151)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.724-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.692-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c'. Ident: collection-909--8000595249233899911, commit timestamp: Timestamp(1574796776, 2013)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.682-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-239--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2151)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.709-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.667-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-249--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2151)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.724-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (fc9e6bfd-af76-4606-a934-860e20aba03a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 2522), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.695-0500 I COMMAND [ReplWriterWorker-3] CMD: drop test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.684-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-260--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717) with drop timestamp Timestamp(1574796717, 2216)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.710-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c (3fb64bcc-692d-4f4e-97fe-bfc41162ddf9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 2013), t: 1 } and commit timestamp Timestamp(1574796776, 2013)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.668-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-239--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2151)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.724-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (fc9e6bfd-af76-4606-a934-860e20aba03a).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.696-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b (77d68908-3b2c-41ce-a53d-ff8cc705dcb3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 2014), t: 1 } and commit timestamp Timestamp(1574796776, 2014)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.685-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-271--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717) with drop timestamp Timestamp(1574796717, 2216)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.710-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c (3fb64bcc-692d-4f4e-97fe-bfc41162ddf9).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.669-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-260--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717) with drop timestamp Timestamp(1574796717, 2216)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.724-0500 I STORAGE [conn110] renameCollection: renaming collection f13ebb13-8b27-4155-843a-421ecee443d6 from test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.696-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b (77d68908-3b2c-41ce-a53d-ff8cc705dcb3).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.687-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-259--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717) with drop timestamp Timestamp(1574796717, 2216)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.710-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c (3fb64bcc-692d-4f4e-97fe-bfc41162ddf9)'. Ident: 'index-910--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 2013)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.670-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-271--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717) with drop timestamp Timestamp(1574796717, 2216)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.724-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (fc9e6bfd-af76-4606-a934-860e20aba03a)'. Ident: 'index-908-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 2522)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.696-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b (77d68908-3b2c-41ce-a53d-ff8cc705dcb3)'. Ident: 'index-912--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 2014)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.687-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-258--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05) with drop timestamp Timestamp(1574796717, 2217)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.710-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c (3fb64bcc-692d-4f4e-97fe-bfc41162ddf9)'. Ident: 'index-923--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 2013)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.671-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-259--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.93e2492e-0597-492b-96e8-8c37e6a1b717) with drop timestamp Timestamp(1574796717, 2216)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.724-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (fc9e6bfd-af76-4606-a934-860e20aba03a)'. Ident: 'index-911-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 2522)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.724-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-906-8224331490264904478, commit timestamp: Timestamp(1574796776, 2522)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.688-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-269--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05) with drop timestamp Timestamp(1574796717, 2217)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.710-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.9297fa93-4ac6-48ae-9b10-dcbfe2e9727c'. Ident: collection-909--4104909142373009110, commit timestamp: Timestamp(1574796776, 2013)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.672-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-258--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05) with drop timestamp Timestamp(1574796717, 2217)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.696-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b (77d68908-3b2c-41ce-a53d-ff8cc705dcb3)'. Ident: 'index-919--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 2014)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.724-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.689-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-257--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05) with drop timestamp Timestamp(1574796717, 2217)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.710-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.673-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-269--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05) with drop timestamp Timestamp(1574796717, 2217)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.696-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b'. Ident: collection-911--8000595249233899911, commit timestamp: Timestamp(1574796776, 2014)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.725-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6198502077533774016, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4607836781836140136, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796776581), clusterTime: Timestamp(1574796776, 8) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 72), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 142ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.691-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-262--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996) with drop timestamp Timestamp(1574796717, 2218)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.711-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b (77d68908-3b2c-41ce-a53d-ff8cc705dcb3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 2014), t: 1 } and commit timestamp Timestamp(1574796776, 2014)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.675-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-257--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.adf5d5a0-f4dd-4d11-a399-3e0763bcdd05) with drop timestamp Timestamp(1574796717, 2217)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.697-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 with provided UUID: e1fb7d34-785d-46dd-a046-741e30624528 and options: { uuid: UUID("e1fb7d34-785d-46dd-a046-741e30624528"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.725-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.693-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-267--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996) with drop timestamp Timestamp(1574796717, 2218)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.711-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b (77d68908-3b2c-41ce-a53d-ff8cc705dcb3).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.676-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-262--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996) with drop timestamp Timestamp(1574796717, 2218)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.714-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.729-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd with generated UUID: 104f4136-0f04-4e91-badd-c62c5209b392 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.694-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-261--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996) with drop timestamp Timestamp(1574796717, 2218)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.711-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b (77d68908-3b2c-41ce-a53d-ff8cc705dcb3)'. Ident: 'index-912--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 2014)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.678-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-267--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996) with drop timestamp Timestamp(1574796717, 2218)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.716-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df with provided UUID: 3dfc5a02-f26e-4f15-91ee-766096d1fd09 and options: { uuid: UUID("3dfc5a02-f26e-4f15-91ee-766096d1fd09"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.736-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.695-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-264--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2531)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.711-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b (77d68908-3b2c-41ce-a53d-ff8cc705dcb3)'. Ident: 'index-919--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 2014)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.679-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-261--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.87f0e722-92d7-4a81-b663-4b6504bdd996) with drop timestamp Timestamp(1574796717, 2218)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.732-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.753-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.696-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-275--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2531)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.711-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.f1c3fef6-7b45-4716-8ab4-3a45a98a2a0b'. Ident: collection-911--4104909142373009110, commit timestamp: Timestamp(1574796776, 2014)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.680-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-264--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2531)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.733-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead with provided UUID: a080f6f9-209a-4d76-b916-1c2b632bacea and options: { uuid: UUID("a080f6f9-209a-4d76-b916-1c2b632bacea"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.753-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.697-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-263--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2531)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.715-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 with provided UUID: e1fb7d34-785d-46dd-a046-741e30624528 and options: { uuid: UUID("e1fb7d34-785d-46dd-a046-741e30624528"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.682-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-275--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2531)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.750-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.753-0500 I STORAGE [conn114] Index build initialized: 74f1561a-1a0a-4675-8dc3-99c3d2f1b953: test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df (3dfc5a02-f26e-4f15-91ee-766096d1fd09 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.698-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-266--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 506)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.733-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.683-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-263--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796717, 2531)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.770-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.753-0500 I INDEX [conn114] Waiting for index build to complete: 74f1561a-1a0a-4675-8dc3-99c3d2f1b953
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.699-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-273--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 506)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.734-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df with provided UUID: 3dfc5a02-f26e-4f15-91ee-766096d1fd09 and options: { uuid: UUID("3dfc5a02-f26e-4f15-91ee-766096d1fd09"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.684-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-266--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 506)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.770-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.753-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.701-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-265--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 506)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.751-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.687-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-273--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 506)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.770-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 0bc804e3-c98c-4fa6-a8a1-96d9a8195c28: test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 (f13ebb13-8b27-4155-843a-421ecee443d6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.755-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 07a8f8a1-4497-475f-bf92-fafd67cc1885: test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 ( e1fb7d34-785d-46dd-a046-741e30624528 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.703-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-278--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 1015)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.752-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead with provided UUID: a080f6f9-209a-4d76-b916-1c2b632bacea and options: { uuid: UUID("a080f6f9-209a-4d76-b916-1c2b632bacea"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.687-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-265--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 506)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.771-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.764-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.704-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-287--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 1015)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.770-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.688-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-278--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 1015)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.771-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.764-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.705-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-277--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 1015)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.787-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.689-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-287--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 1015)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.772-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 with provided UUID: 4cc54e7b-47b4-4788-a5b6-787f5e0527ad and options: { uuid: UUID("4cc54e7b-47b4-4788-a5b6-787f5e0527ad"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.765-0500 I INDEX [conn110] Registering index build: f58bc9d8-9200-4831-8245-998f9dc1a2d4
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.706-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-280--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.788-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.690-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-277--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796718, 1015)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.775-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.774-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.707-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-289--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.788-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 0726c07a-660e-4387-8fb6-c8add9d11e94: test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 (f13ebb13-8b27-4155-843a-421ecee443d6 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.691-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-280--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.785-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 0bc804e3-c98c-4fa6-a8a1-96d9a8195c28: test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 ( f13ebb13-8b27-4155-843a-421ecee443d6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.784-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.708-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-279--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.788-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.693-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-289--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.793-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.784-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.709-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-284--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 507)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.710-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-293--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 507)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.694-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-279--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.807-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 (f13ebb13-8b27-4155-843a-421ecee443d6) to test5_fsmdb0.agg_out and drop fc9e6bfd-af76-4606-a934-860e20aba03a.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.784-0500 I STORAGE [conn108] Index build initialized: 6be6c46a-4a63-4039-9009-0210a8901805: test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead (a080f6f9-209a-4d76-b916-1c2b632bacea ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.788-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.713-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-283--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 507)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.696-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-284--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 507)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.808-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (fc9e6bfd-af76-4606-a934-860e20aba03a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 2522), t: 1 } and commit timestamp Timestamp(1574796776, 2522)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.784-0500 I INDEX [conn108] Waiting for index build to complete: 6be6c46a-4a63-4039-9009-0210a8901805
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.791-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.713-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-282--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1012)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.698-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-293--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 507)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.808-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (fc9e6bfd-af76-4606-a934-860e20aba03a).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.784-0500 I INDEX [conn112] Index build completed: 07a8f8a1-4497-475f-bf92-fafd67cc1885
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.794-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 0726c07a-660e-4387-8fb6-c8add9d11e94: test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 ( f13ebb13-8b27-4155-843a-421ecee443d6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.715-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-295--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1012)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.699-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-283--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 507)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.808-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection f13ebb13-8b27-4155-843a-421ecee443d6 from test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.784-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.795-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 with provided UUID: 4cc54e7b-47b4-4788-a5b6-787f5e0527ad and options: { uuid: UUID("4cc54e7b-47b4-4788-a5b6-787f5e0527ad"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.716-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-281--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1012)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.700-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-282--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1012)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.808-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (fc9e6bfd-af76-4606-a934-860e20aba03a)'. Ident: 'index-916--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 2522)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.784-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 2084), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 104ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.813-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.717-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-286--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1517)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.701-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-295--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1012)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.808-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (fc9e6bfd-af76-4606-a934-860e20aba03a)'. Ident: 'index-921--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 2522)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.785-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 74f1561a-1a0a-4675-8dc3-99c3d2f1b953: test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df ( 3dfc5a02-f26e-4f15-91ee-766096d1fd09 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:59.803-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796776, 4361), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2940ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.818-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796776, 2212) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796776, 2276), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 185 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 107ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.718-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-297--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1517)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.703-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-281--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1012)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.808-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-915--8000595249233899911, commit timestamp: Timestamp(1574796776, 2522)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.786-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.819-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 (f13ebb13-8b27-4155-843a-421ecee443d6) to test5_fsmdb0.agg_out and drop fc9e6bfd-af76-4606-a934-860e20aba03a.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.719-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-285--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1517)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.704-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-286--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1517)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.809-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd with provided UUID: 104f4136-0f04-4e91-badd-c62c5209b392 and options: { uuid: UUID("104f4136-0f04-4e91-badd-c62c5209b392"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.788-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.819-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (fc9e6bfd-af76-4606-a934-860e20aba03a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 2522), t: 1 } and commit timestamp Timestamp(1574796776, 2522)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.720-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-292--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2021)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.705-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-297--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1517)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.827-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.799-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 6be6c46a-4a63-4039-9009-0210a8901805: test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead ( a080f6f9-209a-4d76-b916-1c2b632bacea ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.819-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (fc9e6bfd-af76-4606-a934-860e20aba03a).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.722-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-303--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2021)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.707-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-285--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 1517)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.845-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.809-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.819-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection f13ebb13-8b27-4155-843a-421ecee443d6 from test5_fsmdb0.tmp.agg_out.5b55932b-ffe8-4fed-a1a7-1c2e4d474f92 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.723-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-291--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2021)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.708-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-292--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2021)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.845-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.809-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.819-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (fc9e6bfd-af76-4606-a934-860e20aba03a)'. Ident: 'index-916--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 2522)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.724-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-300--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2526)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.709-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-303--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2021)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.845-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 3283a2f1-c379-409b-aca5-c4b775a153f3: test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 (e1fb7d34-785d-46dd-a046-741e30624528 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.809-0500 I STORAGE [conn46] Index build initialized: db044671-936a-43ac-93d1-8918873401ae: test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 (4cc54e7b-47b4-4788-a5b6-787f5e0527ad ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.819-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (fc9e6bfd-af76-4606-a934-860e20aba03a)'. Ident: 'index-921--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 2522)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.726-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-307--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2526)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.710-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-291--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2021)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.845-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.810-0500 I INDEX [conn46] Waiting for index build to complete: db044671-936a-43ac-93d1-8918873401ae
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.819-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-915--4104909142373009110, commit timestamp: Timestamp(1574796776, 2522)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.727-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-299--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2526)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.711-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-300--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2526)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.846-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.827-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.828-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd with provided UUID: 104f4136-0f04-4e91-badd-c62c5209b392 and options: { uuid: UUID("104f4136-0f04-4e91-badd-c62c5209b392"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.727-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-302--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 3029)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.712-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-307--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2526)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.848-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.827-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.846-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.728-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-311--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 3029)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.713-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-299--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 2526)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.852-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3283a2f1-c379-409b-aca5-c4b775a153f3: test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 ( e1fb7d34-785d-46dd-a046-741e30624528 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.827-0500 I STORAGE [conn110] Index build initialized: f58bc9d8-9200-4831-8245-998f9dc1a2d4: test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd (104f4136-0f04-4e91-badd-c62c5209b392 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.864-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.729-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-301--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 3029)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.714-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-302--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 3029)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.869-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.827-0500 I INDEX [conn110] Waiting for index build to complete: f58bc9d8-9200-4831-8245-998f9dc1a2d4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.864-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.732-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-306--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.716-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-311--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 3029)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.869-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.827-0500 I INDEX [conn108] Index build completed: 6be6c46a-4a63-4039-9009-0210a8901805
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.864-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 681f73e8-07b5-4d36-93f6-3e52350526ea: test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 (e1fb7d34-785d-46dd-a046-741e30624528 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.732-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-313--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.717-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-301--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796722, 3029)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.869-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: fe3da6c4-33cd-4aea-847a-989dccd71c8d: test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df (3dfc5a02-f26e-4f15-91ee-766096d1fd09 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.827-0500 I INDEX [conn114] Index build completed: 74f1561a-1a0a-4675-8dc3-99c3d2f1b953
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.864-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.734-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-305--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.718-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-306--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.869-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.827-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.864-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.735-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-310--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 510)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.719-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-313--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.869-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.827-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 2084), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 131ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.866-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.736-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-319--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 510)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.720-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-305--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.872-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.827-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (f13ebb13-8b27-4155-843a-421ecee443d6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 3034), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.871-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 681f73e8-07b5-4d36-93f6-3e52350526ea: test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 ( e1fb7d34-785d-46dd-a046-741e30624528 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.737-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-309--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 510)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.721-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-310--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 510)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.883-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: fe3da6c4-33cd-4aea-847a-989dccd71c8d: test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df ( 3dfc5a02-f26e-4f15-91ee-766096d1fd09 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.827-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 2084), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 390 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 139ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.889-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.738-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-316--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 1850)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.722-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-319--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 510)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.893-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.827-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (f13ebb13-8b27-4155-843a-421ecee443d6).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.889-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.739-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-325--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 1850)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.723-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-309--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 510)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.894-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.827-0500 I STORAGE [conn112] renameCollection: renaming collection e1fb7d34-785d-46dd-a046-741e30624528 from test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.889-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: d77599eb-006d-478e-a554-37a4d27929e9: test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df (3dfc5a02-f26e-4f15-91ee-766096d1fd09 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.742-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-315--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 1850)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.726-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-316--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 1850)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.894-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: be2d0f3b-060c-48cf-b046-9164f5e016c9: test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead (a080f6f9-209a-4d76-b916-1c2b632bacea ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.828-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f13ebb13-8b27-4155-843a-421ecee443d6)'. Ident: 'index-914-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 3034)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.889-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.743-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-318--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2021)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.727-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-325--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 1850)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.894-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.828-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f13ebb13-8b27-4155-843a-421ecee443d6)'. Ident: 'index-915-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 3034)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.890-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.745-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-329--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2021)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.727-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-315--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 1850)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.894-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.828-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-913-8224331490264904478, commit timestamp: Timestamp(1574796776, 3034)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.893-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.746-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-317--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2021)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.728-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-318--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2021)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.898-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.828-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.895-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d77599eb-006d-478e-a554-37a4d27929e9: test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df ( 3dfc5a02-f26e-4f15-91ee-766096d1fd09 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.747-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-322--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2022)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.729-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-329--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2021)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.902-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: be2d0f3b-060c-48cf-b046-9164f5e016c9: test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead ( a080f6f9-209a-4d76-b916-1c2b632bacea ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.828-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.912-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.748-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-331--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2022)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.730-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-317--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2021)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.912-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 (e1fb7d34-785d-46dd-a046-741e30624528) to test5_fsmdb0.agg_out and drop f13ebb13-8b27-4155-843a-421ecee443d6.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.828-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4067869154347580577, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2125128303225949397, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796776628), clusterTime: Timestamp(1574796776, 2014) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 2014), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 199ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.912-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.749-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-321--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2022)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.732-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-322--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2022)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.912-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (f13ebb13-8b27-4155-843a-421ecee443d6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 3034), t: 1 } and commit timestamp Timestamp(1574796776, 3034)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.828-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.912-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 418e9689-0338-4d05-aab3-cb50e912150a: test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead (a080f6f9-209a-4d76-b916-1c2b632bacea ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.750-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-324--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 373)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.732-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-331--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2022)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.912-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (f13ebb13-8b27-4155-843a-421ecee443d6).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.829-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.912-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.753-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-335--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 373)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.735-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-321--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796723, 2022)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.912-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection e1fb7d34-785d-46dd-a046-741e30624528 from test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.831-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 with generated UUID: 021d2085-f347-4a54-a76b-e2710727cdf9 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.912-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.754-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-323--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 373)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.736-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-324--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 373)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.912-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f13ebb13-8b27-4155-843a-421ecee443d6)'. Ident: 'index-926--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 3034)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.831-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.916-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.755-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-334--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796) with drop timestamp Timestamp(1574796726, 1203)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.738-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-335--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 373)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.912-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f13ebb13-8b27-4155-843a-421ecee443d6)'. Ident: 'index-933--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 3034)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.834-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.920-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 (e1fb7d34-785d-46dd-a046-741e30624528) to test5_fsmdb0.agg_out and drop f13ebb13-8b27-4155-843a-421ecee443d6.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.757-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-339--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796) with drop timestamp Timestamp(1574796726, 1203)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.739-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-323--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 373)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.912-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-925--8000595249233899911, commit timestamp: Timestamp(1574796776, 3034)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.845-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: db044671-936a-43ac-93d1-8918873401ae: test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 ( 4cc54e7b-47b4-4788-a5b6-787f5e0527ad ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.920-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (f13ebb13-8b27-4155-843a-421ecee443d6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 3034), t: 1 } and commit timestamp Timestamp(1574796776, 3034)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.758-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-333--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796) with drop timestamp Timestamp(1574796726, 1203)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.740-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-334--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796) with drop timestamp Timestamp(1574796726, 1203)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.913-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 with provided UUID: 021d2085-f347-4a54-a76b-e2710727cdf9 and options: { uuid: UUID("021d2085-f347-4a54-a76b-e2710727cdf9"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.845-0500 I INDEX [conn46] Index build completed: db044671-936a-43ac-93d1-8918873401ae
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.920-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (f13ebb13-8b27-4155-843a-421ecee443d6).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.759-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-346--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b) with drop timestamp Timestamp(1574796726, 2083)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.742-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-339--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796) with drop timestamp Timestamp(1574796726, 1203)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.930-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.845-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 2084), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 140ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.921-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection e1fb7d34-785d-46dd-a046-741e30624528 from test5_fsmdb0.tmp.agg_out.dc04c02a-5a06-4aa7-bd5c-5857b90c54f8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.760-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-349--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b) with drop timestamp Timestamp(1574796726, 2083)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.743-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-333--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.dafeccec-f0b0-405a-83cd-00e23168a796) with drop timestamp Timestamp(1574796726, 1203)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.950-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.848-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: f58bc9d8-9200-4831-8245-998f9dc1a2d4: test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd ( 104f4136-0f04-4e91-badd-c62c5209b392 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.921-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f13ebb13-8b27-4155-843a-421ecee443d6)'. Ident: 'index-926--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 3034)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.761-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-345--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b) with drop timestamp Timestamp(1574796726, 2083)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.744-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-346--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b) with drop timestamp Timestamp(1574796726, 2083)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.950-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.950-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 64e1ed55-108b-4a1e-806b-4ce5c018008f: test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 (4cc54e7b-47b4-4788-a5b6-787f5e0527ad ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.921-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f13ebb13-8b27-4155-843a-421ecee443d6)'. Ident: 'index-933--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 3034)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.764-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-344--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05) with drop timestamp Timestamp(1574796726, 2150)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.746-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-349--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b) with drop timestamp Timestamp(1574796726, 2083)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.950-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.921-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-925--4104909142373009110, commit timestamp: Timestamp(1574796776, 3034)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.765-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-351--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05) with drop timestamp Timestamp(1574796726, 2150)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.848-0500 I INDEX [conn110] Index build completed: f58bc9d8-9200-4831-8245-998f9dc1a2d4
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.747-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-345--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.2ea3a6a5-c5ff-454c-89aa-3229f7f14e2b) with drop timestamp Timestamp(1574796726, 2083)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.951-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.921-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 418e9689-0338-4d05-aab3-cb50e912150a: test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead ( a080f6f9-209a-4d76-b916-1c2b632bacea ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.766-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-343--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05) with drop timestamp Timestamp(1574796726, 2150)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.857-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.748-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-344--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05) with drop timestamp Timestamp(1574796726, 2150)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.953-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.931-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 with provided UUID: 021d2085-f347-4a54-a76b-e2710727cdf9 and options: { uuid: UUID("021d2085-f347-4a54-a76b-e2710727cdf9"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.767-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-342--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235) with drop timestamp Timestamp(1574796726, 3334)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.858-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.750-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-351--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05) with drop timestamp Timestamp(1574796726, 2150)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.957-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 64e1ed55-108b-4a1e-806b-4ce5c018008f: test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 ( 4cc54e7b-47b4-4788-a5b6-787f5e0527ad ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.949-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.768-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-355--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235) with drop timestamp Timestamp(1574796726, 3334)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.859-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (e1fb7d34-785d-46dd-a046-741e30624528) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 4168), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.751-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-343--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.53ff971d-753c-401b-9626-c387b69d2b05) with drop timestamp Timestamp(1574796726, 2150)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.978-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.969-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.769-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-341--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235) with drop timestamp Timestamp(1574796726, 3334)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.859-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (e1fb7d34-785d-46dd-a046-741e30624528).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.753-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-342--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235) with drop timestamp Timestamp(1574796726, 3334)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.978-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.969-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.770-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-328--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3335)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.859-0500 I STORAGE [conn114] renameCollection: renaming collection 3dfc5a02-f26e-4f15-91ee-766096d1fd09 from test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.753-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-355--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235) with drop timestamp Timestamp(1574796726, 3334)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.978-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 922d7158-db55-4c41-87d4-9972cffc9149: test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd (104f4136-0f04-4e91-badd-c62c5209b392 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.969-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: cafaa0e6-b8ef-4341-8ad4-cec5a915f821: test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 (4cc54e7b-47b4-4788-a5b6-787f5e0527ad ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.771-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-337--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3335)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.859-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e1fb7d34-785d-46dd-a046-741e30624528)'. Ident: 'index-921-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 4168)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.755-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-341--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.fb69bd15-a5b6-443a-87a6-46c842db4235) with drop timestamp Timestamp(1574796726, 3334)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.978-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.969-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.773-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-327--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3335)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.859-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e1fb7d34-785d-46dd-a046-741e30624528)'. Ident: 'index-925-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 4168)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.757-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-328--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3335)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.979-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.970-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.775-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-354--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3336)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.859-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-917-8224331490264904478, commit timestamp: Timestamp(1574796776, 4168)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.758-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-337--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3335)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.982-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.972-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.776-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-359--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3336)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.859-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.760-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-327--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3335)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.987-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 922d7158-db55-4c41-87d4-9972cffc9149: test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd ( 104f4136-0f04-4e91-badd-c62c5209b392 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.975-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: cafaa0e6-b8ef-4341-8ad4-cec5a915f821: test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 ( 4cc54e7b-47b4-4788-a5b6-787f5e0527ad ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.778-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-353--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3336)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.859-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (3dfc5a02-f26e-4f15-91ee-766096d1fd09) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 4169), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.761-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-354--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3336)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.991-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df (3dfc5a02-f26e-4f15-91ee-766096d1fd09) to test5_fsmdb0.agg_out and drop e1fb7d34-785d-46dd-a046-741e30624528.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.999-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.779-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-362--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b) with drop timestamp Timestamp(1574796726, 4350)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.859-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (3dfc5a02-f26e-4f15-91ee-766096d1fd09).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.762-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-359--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3336)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.991-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (e1fb7d34-785d-46dd-a046-741e30624528) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 4168), t: 1 } and commit timestamp Timestamp(1574796776, 4168)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.999-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.779-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-365--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b) with drop timestamp Timestamp(1574796726, 4350)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.859-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1109131887199700443, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 193480265855241782, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796776628), clusterTime: Timestamp(1574796776, 2014) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 2014), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 229ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.764-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-353--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 3336)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.991-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (e1fb7d34-785d-46dd-a046-741e30624528).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.999-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: c34d9bb2-2dbf-45e9-8ebf-04ed9ee73ee4: test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd (104f4136-0f04-4e91-badd-c62c5209b392 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.780-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-361--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b) with drop timestamp Timestamp(1574796726, 4350)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.859-0500 I STORAGE [conn112] renameCollection: renaming collection a080f6f9-209a-4d76-b916-1c2b632bacea from test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.765-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-362--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b) with drop timestamp Timestamp(1574796726, 4350)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.991-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 3dfc5a02-f26e-4f15-91ee-766096d1fd09 from test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.999-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.782-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-364--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a) with drop timestamp Timestamp(1574796726, 4354)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.859-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3dfc5a02-f26e-4f15-91ee-766096d1fd09)'. Ident: 'index-922-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 4169)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.766-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-365--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b) with drop timestamp Timestamp(1574796726, 4350)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.991-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e1fb7d34-785d-46dd-a046-741e30624528)'. Ident: 'index-928--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 4168)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:56.999-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.784-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-373--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a) with drop timestamp Timestamp(1574796726, 4354)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.859-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3dfc5a02-f26e-4f15-91ee-766096d1fd09)'. Ident: 'index-927-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 4169)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.768-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-361--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.fc21d836-ccd3-45f5-87c1-920f2b03c03b) with drop timestamp Timestamp(1574796726, 4350)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.991-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e1fb7d34-785d-46dd-a046-741e30624528)'. Ident: 'index-939--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 4168)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.003-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.785-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-363--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a) with drop timestamp Timestamp(1574796726, 4354)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.859-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-918-8224331490264904478, commit timestamp: Timestamp(1574796776, 4169)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.769-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-364--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a) with drop timestamp Timestamp(1574796726, 4354)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.991-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-927--8000595249233899911, commit timestamp: Timestamp(1574796776, 4168)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.008-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: c34d9bb2-2dbf-45e9-8ebf-04ed9ee73ee4: test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd ( 104f4136-0f04-4e91-badd-c62c5209b392 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.786-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-348--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5360)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.859-0500 I INDEX [conn108] Registering index build: b7bda0cd-7f15-477d-bf8b-83d5eb79b5ef
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.770-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-373--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a) with drop timestamp Timestamp(1574796726, 4354)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.992-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead (a080f6f9-209a-4d76-b916-1c2b632bacea) to test5_fsmdb0.agg_out and drop 3dfc5a02-f26e-4f15-91ee-766096d1fd09.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.013-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df (3dfc5a02-f26e-4f15-91ee-766096d1fd09) to test5_fsmdb0.agg_out and drop e1fb7d34-785d-46dd-a046-741e30624528.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.787-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-357--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5360)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.859-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7925760083110180624, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2851270042651971891, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796776629), clusterTime: Timestamp(1574796776, 2014) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 2014), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 230ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.771-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-363--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.21026da2-e991-48ab-bbb9-8e7b9205709a) with drop timestamp Timestamp(1574796726, 4354)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.992-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (3dfc5a02-f26e-4f15-91ee-766096d1fd09) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 4169), t: 1 } and commit timestamp Timestamp(1574796776, 4169)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.014-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (e1fb7d34-785d-46dd-a046-741e30624528) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 4168), t: 1 } and commit timestamp Timestamp(1574796776, 4168)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.789-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-347--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5360)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.862-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 with generated UUID: 28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.772-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-348--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5360)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.992-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (3dfc5a02-f26e-4f15-91ee-766096d1fd09).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.014-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (e1fb7d34-785d-46dd-a046-741e30624528).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.790-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-368--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5361)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.864-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 with generated UUID: ebd168fe-7c3a-4353-a065-9a920a4efab0 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.773-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-357--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5360)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.992-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection a080f6f9-209a-4d76-b916-1c2b632bacea from test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.014-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 3dfc5a02-f26e-4f15-91ee-766096d1fd09 from test5_fsmdb0.tmp.agg_out.b1de6d66-edb8-4092-97f8-adce36b436df to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.791-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-375--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5361)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.898-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.774-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-347--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5360)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.992-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3dfc5a02-f26e-4f15-91ee-766096d1fd09)'. Ident: 'index-930--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 4169)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.014-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e1fb7d34-785d-46dd-a046-741e30624528)'. Ident: 'index-928--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 4168)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.792-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-367--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5361)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.898-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.775-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-368--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5361)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.992-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3dfc5a02-f26e-4f15-91ee-766096d1fd09)'. Ident: 'index-941--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 4169)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.014-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e1fb7d34-785d-46dd-a046-741e30624528)'. Ident: 'index-939--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 4168)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.794-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-370--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5869)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.898-0500 I STORAGE [conn108] Index build initialized: b7bda0cd-7f15-477d-bf8b-83d5eb79b5ef: test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 (021d2085-f347-4a54-a76b-e2710727cdf9 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.778-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-375--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5361)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.992-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-929--8000595249233899911, commit timestamp: Timestamp(1574796776, 4169)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.014-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-927--4104909142373009110, commit timestamp: Timestamp(1574796776, 4168)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.795-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-377--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5869)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.898-0500 I INDEX [conn108] Waiting for index build to complete: b7bda0cd-7f15-477d-bf8b-83d5eb79b5ef
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.779-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-367--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5361)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:56.995-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 with provided UUID: 28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c and options: { uuid: UUID("28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.014-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead (a080f6f9-209a-4d76-b916-1c2b632bacea) to test5_fsmdb0.agg_out and drop 3dfc5a02-f26e-4f15-91ee-766096d1fd09.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.796-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-369--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5869)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.907-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.780-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-370--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5869)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.012-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.015-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (3dfc5a02-f26e-4f15-91ee-766096d1fd09) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 4169), t: 1 } and commit timestamp Timestamp(1574796776, 4169)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.797-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-372--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.915-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.781-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-377--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5869)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.015-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 with provided UUID: ebd168fe-7c3a-4353-a065-9a920a4efab0 and options: { uuid: UUID("ebd168fe-7c3a-4353-a065-9a920a4efab0"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.015-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (3dfc5a02-f26e-4f15-91ee-766096d1fd09).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.799-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-383--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.915-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.782-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-369--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796726, 5869)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.033-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.015-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection a080f6f9-209a-4d76-b916-1c2b632bacea from test5_fsmdb0.tmp.agg_out.3f86a673-1336-4830-b67c-d2385cd09ead to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.799-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-371--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.915-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (a080f6f9-209a-4d76-b916-1c2b632bacea) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 5045), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.784-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-372--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.037-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 (4cc54e7b-47b4-4788-a5b6-787f5e0527ad) to test5_fsmdb0.agg_out and drop a080f6f9-209a-4d76-b916-1c2b632bacea.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.015-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3dfc5a02-f26e-4f15-91ee-766096d1fd09)'. Ident: 'index-930--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 4169)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.800-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-380--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 509)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.915-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (a080f6f9-209a-4d76-b916-1c2b632bacea).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.785-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-383--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.037-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (a080f6f9-209a-4d76-b916-1c2b632bacea) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 5045), t: 1 } and commit timestamp Timestamp(1574796776, 5045)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.015-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3dfc5a02-f26e-4f15-91ee-766096d1fd09)'. Ident: 'index-941--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 4169)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.801-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-387--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 509)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.915-0500 I STORAGE [conn46] renameCollection: renaming collection 4cc54e7b-47b4-4788-a5b6-787f5e0527ad from test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.786-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-371--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.037-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (a080f6f9-209a-4d76-b916-1c2b632bacea).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.015-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-929--4104909142373009110, commit timestamp: Timestamp(1574796776, 4169)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.804-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-379--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 509)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.915-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a080f6f9-209a-4d76-b916-1c2b632bacea)'. Ident: 'index-923-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 5045)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.789-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-380--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 509)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.037-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 4cc54e7b-47b4-4788-a5b6-787f5e0527ad from test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.017-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 with provided UUID: 28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c and options: { uuid: UUID("28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.805-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-382--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1077)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.915-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a080f6f9-209a-4d76-b916-1c2b632bacea)'. Ident: 'index-931-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 5045)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.790-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-387--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 509)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.037-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a080f6f9-209a-4d76-b916-1c2b632bacea)'. Ident: 'index-932--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 5045)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.034-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.805-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-391--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1077)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.915-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-919-8224331490264904478, commit timestamp: Timestamp(1574796776, 5045)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.791-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-379--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 509)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.037-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a080f6f9-209a-4d76-b916-1c2b632bacea)'. Ident: 'index-943--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 5045)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.037-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 with provided UUID: ebd168fe-7c3a-4353-a065-9a920a4efab0 and options: { uuid: UUID("ebd168fe-7c3a-4353-a065-9a920a4efab0"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.806-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-381--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1077)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.915-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.792-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-382--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1077)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.037-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-931--8000595249233899911, commit timestamp: Timestamp(1574796776, 5045)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.054-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.808-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-386--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.915-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (4cc54e7b-47b4-4788-a5b6-787f5e0527ad) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 5046), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.793-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-391--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1077)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.038-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd (104f4136-0f04-4e91-badd-c62c5209b392) to test5_fsmdb0.agg_out and drop 4cc54e7b-47b4-4788-a5b6-787f5e0527ad.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.058-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 (4cc54e7b-47b4-4788-a5b6-787f5e0527ad) to test5_fsmdb0.agg_out and drop a080f6f9-209a-4d76-b916-1c2b632bacea.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.809-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-393--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.915-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (4cc54e7b-47b4-4788-a5b6-787f5e0527ad).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.794-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-381--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1077)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.038-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (4cc54e7b-47b4-4788-a5b6-787f5e0527ad) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 5046), t: 1 } and commit timestamp Timestamp(1574796776, 5046)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.058-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (a080f6f9-209a-4d76-b916-1c2b632bacea) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 5045), t: 1 } and commit timestamp Timestamp(1574796776, 5045)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.810-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-385--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.915-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5448131316956138460, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4298463183063364638, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796776629), clusterTime: Timestamp(1574796776, 2014) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 2014), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 285ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.795-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-386--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1518)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.038-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (4cc54e7b-47b4-4788-a5b6-787f5e0527ad).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.058-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (a080f6f9-209a-4d76-b916-1c2b632bacea).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.811-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-390--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2023)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.915-0500 I STORAGE [conn110] renameCollection: renaming collection 104f4136-0f04-4e91-badd-c62c5209b392 from test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.796-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-393--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1518)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.038-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 104f4136-0f04-4e91-badd-c62c5209b392 from test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.058-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 4cc54e7b-47b4-4788-a5b6-787f5e0527ad from test5_fsmdb0.tmp.agg_out.a9fc2369-ba65-4f59-b6f5-554e9322e119 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.813-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-397--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2023)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.916-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4cc54e7b-47b4-4788-a5b6-787f5e0527ad)'. Ident: 'index-924-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 5046)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.799-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-385--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 1518)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.038-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4cc54e7b-47b4-4788-a5b6-787f5e0527ad)'. Ident: 'index-936--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 5046)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.058-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a080f6f9-209a-4d76-b916-1c2b632bacea)'. Ident: 'index-932--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 5045)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.814-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-389--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2023)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.916-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4cc54e7b-47b4-4788-a5b6-787f5e0527ad)'. Ident: 'index-933-8224331490264904478', commit timestamp: 'Timestamp(1574796776, 5046)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.799-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-390--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2023)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.038-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4cc54e7b-47b4-4788-a5b6-787f5e0527ad)'. Ident: 'index-947--8000595249233899911', commit timestamp: 'Timestamp(1574796776, 5046)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.058-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a080f6f9-209a-4d76-b916-1c2b632bacea)'. Ident: 'index-943--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 5045)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.815-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-396--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2532)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.916-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-920-8224331490264904478, commit timestamp: Timestamp(1574796776, 5046)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.800-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-397--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2023)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.038-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-935--8000595249233899911, commit timestamp: Timestamp(1574796776, 5046)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.058-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-931--4104909142373009110, commit timestamp: Timestamp(1574796776, 5045)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.817-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-405--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2532)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.916-0500 I INDEX [conn114] Registering index build: 2e27023c-0456-4cfd-b401-9506cc4858f1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.801-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-389--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2023)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.054-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.059-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd (104f4136-0f04-4e91-badd-c62c5209b392) to test5_fsmdb0.agg_out and drop 4cc54e7b-47b4-4788-a5b6-787f5e0527ad.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.818-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-395--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2532)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.916-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.802-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-396--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2532)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.054-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.059-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (4cc54e7b-47b4-4788-a5b6-787f5e0527ad) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796776, 5046), t: 1 } and commit timestamp Timestamp(1574796776, 5046)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.819-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-400--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3537)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.916-0500 I INDEX [conn112] Registering index build: 99a8265e-16ea-4382-977a-2133353bd42c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.804-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-405--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2532)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.054-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: f607567a-2a6c-4d75-ad6e-037c6dfe3537: test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 (021d2085-f347-4a54-a76b-e2710727cdf9 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.059-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (4cc54e7b-47b4-4788-a5b6-787f5e0527ad).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.820-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-409--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3537)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.916-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2575545485940378729, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 637849720931492511, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796776727), clusterTime: Timestamp(1574796776, 2522) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 2522), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 187ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.804-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-395--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 2532)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.054-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.059-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 104f4136-0f04-4e91-badd-c62c5209b392 from test5_fsmdb0.tmp.agg_out.7e285288-c4a2-45f4-bf48-152dc1d0cfdd to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.821-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-399--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3537)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.916-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.805-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-400--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3537)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.055-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.059-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4cc54e7b-47b4-4788-a5b6-787f5e0527ad)'. Ident: 'index-936--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 5046)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.823-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-402--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3540)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:59.856-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796776, 4169), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2995ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:59.861-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796776, 5046), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2944ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.917-0500 I COMMAND [conn71] CMD: dropIndexes test5_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.808-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-409--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3537)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.056-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 with provided UUID: dc8698bd-eac8-46db-a36a-65a1245ec94d and options: { uuid: UUID("dc8698bd-eac8-46db-a36a-65a1245ec94d"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.059-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4cc54e7b-47b4-4788-a5b6-787f5e0527ad)'. Ident: 'index-947--4104909142373009110', commit timestamp: 'Timestamp(1574796776, 5046)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.824-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-411--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3540)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:59.912-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796777, 763), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2866ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.928-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:32:59.894-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796776, 5049), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2955ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.809-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-399--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3537)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.057-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.059-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-935--4104909142373009110, commit timestamp: Timestamp(1574796776, 5046)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.825-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-401--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3540)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:32:59.913-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796779, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 108ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.937-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:00.019-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796779, 1332), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:823 protocol:op_msg 157ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.810-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-402--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3540)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.069-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: f607567a-2a6c-4d75-ad6e-037c6dfe3537: test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 ( 021d2085-f347-4a54-a76b-e2710727cdf9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.076-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.826-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-404--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 4046)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:00.040-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796779, 1265), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:823 protocol:op_msg 181ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.937-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.811-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-411--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3540)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.077-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.076-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.827-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-415--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 4046)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.937-0500 I STORAGE [conn114] Index build initialized: 2e27023c-0456-4cfd-b401-9506cc4858f1: test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 (ebd168fe-7c3a-4353-a065-9a920a4efab0 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.813-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-401--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 3540)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.078-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 with provided UUID: a7a18bfd-fa85-44a6-aec7-208302ab14a4 and options: { uuid: UUID("a7a18bfd-fa85-44a6-aec7-208302ab14a4"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.076-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: a1e69c1b-2cff-4fb1-887e-9db017d73b21: test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 (021d2085-f347-4a54-a76b-e2710727cdf9 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.829-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-403--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 4046)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.937-0500 I INDEX [conn114] Waiting for index build to complete: 2e27023c-0456-4cfd-b401-9506cc4858f1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.814-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-404--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 4046)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.093-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.077-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.830-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-408--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.937-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.815-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-415--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 4046)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.111-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.077-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796776, 5046) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796776, 5046), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 135ms
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.831-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-419--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.938-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: b7bda0cd-7f15-477d-bf8b-83d5eb79b5ef: test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 ( 021d2085-f347-4a54-a76b-e2710727cdf9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.816-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-403--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796729, 4046)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.111-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.077-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.833-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-407--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.938-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 with generated UUID: dc8698bd-eac8-46db-a36a-65a1245ec94d and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.818-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-408--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.111-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: a0fad54b-db1d-4191-b3b5-cd09c7f9a574: test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 (ebd168fe-7c3a-4353-a065-9a920a4efab0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.078-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 with provided UUID: dc8698bd-eac8-46db-a36a-65a1245ec94d and options: { uuid: UUID("dc8698bd-eac8-46db-a36a-65a1245ec94d"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.834-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-414--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 510)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.939-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.819-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-419--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.111-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.080-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.835-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-421--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 510)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.940-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 with generated UUID: a7a18bfd-fa85-44a6-aec7-208302ab14a4 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.820-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-407--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.112-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.090-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: a1e69c1b-2cff-4fb1-887e-9db017d73b21: test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 ( 021d2085-f347-4a54-a76b-e2710727cdf9 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.836-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-413--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 510)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.942-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.822-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-414--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 510)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.115-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.098-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.837-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-418--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1081)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.968-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 2e27023c-0456-4cfd-b401-9506cc4858f1: test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 ( ebd168fe-7c3a-4353-a065-9a920a4efab0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.823-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-421--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 510)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.117-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: a0fad54b-db1d-4191-b3b5-cd09c7f9a574: test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 ( ebd168fe-7c3a-4353-a065-9a920a4efab0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.099-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 with provided UUID: a7a18bfd-fa85-44a6-aec7-208302ab14a4 and options: { uuid: UUID("a7a18bfd-fa85-44a6-aec7-208302ab14a4"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.839-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-427--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1081)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.977-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.824-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-413--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 510)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.140-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.115-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.840-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-417--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1081)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.978-0500 I INDEX [conn110] Registering index build: b0148589-933f-4cc4-81d6-8d774862c6b6
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.825-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-418--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1081)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.140-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.133-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.841-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-424--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1650)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.986-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.826-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-427--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1081)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.140-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 13754aa9-9a25-4ff2-9d5d-60351de060b7: test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.133-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.843-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-433--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1650)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.986-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.829-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-417--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1081)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.140-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.133-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: eb1bb1ba-ef20-4a8a-a0bf-7be70fa1cbce: test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 (ebd168fe-7c3a-4353-a065-9a920a4efab0 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.844-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-423--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1650)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.986-0500 I STORAGE [conn112] Index build initialized: 99a8265e-16ea-4382-977a-2133353bd42c: test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.830-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-424--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1650)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.141-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.133-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.845-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-426--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2091)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.986-0500 I INDEX [conn112] Waiting for index build to complete: 99a8265e-16ea-4382-977a-2133353bd42c
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.831-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-433--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1650)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.143-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.133-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.846-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-435--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2091)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.986-0500 I INDEX [conn108] Index build completed: b7bda0cd-7f15-477d-bf8b-83d5eb79b5ef
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.832-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-423--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 1650)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.153-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 13754aa9-9a25-4ff2-9d5d-60351de060b7: test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 ( 28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.135-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.847-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-425--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2091)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.986-0500 I INDEX [conn114] Index build completed: 2e27023c-0456-4cfd-b401-9506cc4858f1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.833-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-426--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2091)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.161-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.139-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: eb1bb1ba-ef20-4a8a-a0bf-7be70fa1cbce: test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 ( ebd168fe-7c3a-4353-a065-9a920a4efab0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.848-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-430--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2596)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.986-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 4167), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 2125 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 128ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.834-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-435--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2091)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.161-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.161-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.849-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-439--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2596)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.992-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.835-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-425--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2091)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.161-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: b6015bdc-4534-4dca-9888-3cac6f5d256b: test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 (dc8698bd-eac8-46db-a36a-65a1245ec94d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.161-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.850-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-429--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2596)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:56.992-0500 I INDEX [conn46] Registering index build: 57924b87-a720-48a1-99bb-f0e91a97c88a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.836-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-430--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2596)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.161-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.161-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 54945582-35dc-4320-b92e-87026cf4e004: test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.853-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-432--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 3164)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.013-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.839-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-439--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2596)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.161-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.161-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.854-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-443--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 3164)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.013-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.840-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-429--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 2596)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.163-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 (021d2085-f347-4a54-a76b-e2710727cdf9) to test5_fsmdb0.agg_out and drop 104f4136-0f04-4e91-badd-c62c5209b392.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.161-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.855-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-431--7234316082034423155 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 3164)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.013-0500 I STORAGE [conn110] Index build initialized: b0148589-933f-4cc4-81d6-8d774862c6b6: test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 (dc8698bd-eac8-46db-a36a-65a1245ec94d ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.841-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-432--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 3164)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.164-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.164-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.856-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-442--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948) with drop timestamp Timestamp(1574796731, 4546)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.013-0500 I INDEX [conn110] Waiting for index build to complete: b0148589-933f-4cc4-81d6-8d774862c6b6
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.842-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-443--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 3164)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.164-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (104f4136-0f04-4e91-badd-c62c5209b392) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796777, 699), t: 1 } and commit timestamp Timestamp(1574796777, 699)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.172-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 54945582-35dc-4320-b92e-87026cf4e004: test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 ( 28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.857-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-449--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948) with drop timestamp Timestamp(1574796731, 4546)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.013-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.843-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-431--2310912778499990807 (ns: test4_fsmdb0.agg_out) with drop timestamp Timestamp(1574796731, 3164)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.164-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (104f4136-0f04-4e91-badd-c62c5209b392).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.181-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.858-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-441--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948) with drop timestamp Timestamp(1574796731, 4546)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.014-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.844-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-442--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948) with drop timestamp Timestamp(1574796731, 4546)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.164-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 021d2085-f347-4a54-a76b-e2710727cdf9 from test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.181-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.859-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-452--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94) with drop timestamp Timestamp(1574796731, 4547)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.014-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.845-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-449--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948) with drop timestamp Timestamp(1574796731, 4546)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.164-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (104f4136-0f04-4e91-badd-c62c5209b392)'. Ident: 'index-938--8000595249233899911', commit timestamp: 'Timestamp(1574796777, 699)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.181-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: d52bef3b-5a6e-4c4a-98ae-385319cc2f12: test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 (dc8698bd-eac8-46db-a36a-65a1245ec94d ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.860-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-457--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94) with drop timestamp Timestamp(1574796731, 4547)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.015-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.846-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-441--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.a3fd2c58-61c3-48ce-b8e2-e9e0cd983948) with drop timestamp Timestamp(1574796731, 4546)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.164-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (104f4136-0f04-4e91-badd-c62c5209b392)'. Ident: 'index-949--8000595249233899911', commit timestamp: 'Timestamp(1574796777, 699)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.182-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.863-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-451--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94) with drop timestamp Timestamp(1574796731, 4547)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.031-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.848-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-452--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94) with drop timestamp Timestamp(1574796731, 4547)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.164-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-937--8000595249233899911, commit timestamp: Timestamp(1574796777, 699)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.182-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.864-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-446--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7) with drop timestamp Timestamp(1574796732, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.034-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.849-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-457--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94) with drop timestamp Timestamp(1574796731, 4547)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:57.166-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: b6015bdc-4534-4dca-9888-3cac6f5d256b: test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 ( dc8698bd-eac8-46db-a36a-65a1245ec94d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.183-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 (021d2085-f347-4a54-a76b-e2710727cdf9) to test5_fsmdb0.agg_out and drop 104f4136-0f04-4e91-badd-c62c5209b392.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.865-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-453--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7) with drop timestamp Timestamp(1574796732, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.042-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.850-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-451--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.163853d7-a487-4c6f-aaae-2e66f3bdbe94) with drop timestamp Timestamp(1574796731, 4547)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.804-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 with provided UUID: bd2d1605-349f-4c3a-a188-1a8ebe7263a4 and options: { uuid: UUID("bd2d1605-349f-4c3a-a188-1a8ebe7263a4"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.186-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.867-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-445--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7) with drop timestamp Timestamp(1574796732, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.042-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.851-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-446--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7) with drop timestamp Timestamp(1574796732, 1)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.818-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.186-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (104f4136-0f04-4e91-badd-c62c5209b392) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796777, 699), t: 1 } and commit timestamp Timestamp(1574796777, 699)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.868-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-456--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810) with drop timestamp Timestamp(1574796732, 517)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.042-0500 I STORAGE [conn46] Index build initialized: 57924b87-a720-48a1-99bb-f0e91a97c88a: test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 (a7a18bfd-fa85-44a6-aec7-208302ab14a4 ): indexes: 1
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.853-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-453--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7) with drop timestamp Timestamp(1574796732, 1)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.832-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.186-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (104f4136-0f04-4e91-badd-c62c5209b392).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.869-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-465--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810) with drop timestamp Timestamp(1574796732, 517)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.042-0500 I INDEX [conn46] Waiting for index build to complete: 57924b87-a720-48a1-99bb-f0e91a97c88a
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.854-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-445--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.c2913698-dcb0-45b5-9165-a79a73343fa7) with drop timestamp Timestamp(1574796732, 1)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.832-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.186-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 021d2085-f347-4a54-a76b-e2710727cdf9 from test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.870-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-455--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810) with drop timestamp Timestamp(1574796732, 517)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.042-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.855-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-456--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810) with drop timestamp Timestamp(1574796732, 517)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.832-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 4b128621-1a73-4439-be8c-203806464d11: test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 (a7a18bfd-fa85-44a6-aec7-208302ab14a4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.186-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (104f4136-0f04-4e91-badd-c62c5209b392)'. Ident: 'index-938--4104909142373009110', commit timestamp: 'Timestamp(1574796777, 699)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.871-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-464--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204) with drop timestamp Timestamp(1574796732, 2027)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.043-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (104f4136-0f04-4e91-badd-c62c5209b392) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796777, 699), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.856-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-465--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810) with drop timestamp Timestamp(1574796732, 517)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.832-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.186-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (104f4136-0f04-4e91-badd-c62c5209b392)'. Ident: 'index-949--4104909142373009110', commit timestamp: 'Timestamp(1574796777, 699)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.873-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-471--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204) with drop timestamp Timestamp(1574796732, 2027)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.043-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (104f4136-0f04-4e91-badd-c62c5209b392).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.858-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-455--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.f7b92567-0ab5-48b0-b752-66e061061810) with drop timestamp Timestamp(1574796732, 517)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.833-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.186-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-937--4104909142373009110, commit timestamp: Timestamp(1574796777, 699)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.875-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-463--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204) with drop timestamp Timestamp(1574796732, 2027)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.043-0500 I STORAGE [conn108] renameCollection: renaming collection 021d2085-f347-4a54-a76b-e2710727cdf9 from test5_fsmdb0.tmp.agg_out.881a0032-2223-4b3b-87cb-46d1d4694c33 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.859-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-464--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204) with drop timestamp Timestamp(1574796732, 2027)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.834-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 (ebd168fe-7c3a-4353-a065-9a920a4efab0) to test5_fsmdb0.agg_out and drop 021d2085-f347-4a54-a76b-e2710727cdf9.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:57.187-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: d52bef3b-5a6e-4c4a-98ae-385319cc2f12: test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 ( dc8698bd-eac8-46db-a36a-65a1245ec94d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.876-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-468--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0) with drop timestamp Timestamp(1574796732, 2029)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.043-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (104f4136-0f04-4e91-badd-c62c5209b392)'. Ident: 'index-930-8224331490264904478', commit timestamp: 'Timestamp(1574796777, 699)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.860-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-471--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204) with drop timestamp Timestamp(1574796732, 2027)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.835-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.877-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-473--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0) with drop timestamp Timestamp(1574796732, 2029)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.819-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 with provided UUID: bd2d1605-349f-4c3a-a188-1a8ebe7263a4 and options: { uuid: UUID("bd2d1605-349f-4c3a-a188-1a8ebe7263a4"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.043-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (104f4136-0f04-4e91-badd-c62c5209b392)'. Ident: 'index-935-8224331490264904478', commit timestamp: 'Timestamp(1574796777, 699)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.861-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-463--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.4cc07e25-275f-45c7-af94-ee49e6014204) with drop timestamp Timestamp(1574796732, 2027)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.835-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (021d2085-f347-4a54-a76b-e2710727cdf9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 3), t: 1 } and commit timestamp Timestamp(1574796779, 3)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.878-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-467--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0) with drop timestamp Timestamp(1574796732, 2029)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.834-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.043-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-928-8224331490264904478, commit timestamp: Timestamp(1574796777, 699)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.863-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-468--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0) with drop timestamp Timestamp(1574796732, 2029)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.835-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (021d2085-f347-4a54-a76b-e2710727cdf9).
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.879-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-470--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8) with drop timestamp Timestamp(1574796732, 2030)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.849-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.043-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.864-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-473--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0) with drop timestamp Timestamp(1574796732, 2029)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.835-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection ebd168fe-7c3a-4353-a065-9a920a4efab0 from test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.880-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-475--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8) with drop timestamp Timestamp(1574796732, 2030)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.849-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.044-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8302835832390067768, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1637878423292414383, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796776830), clusterTime: Timestamp(1574796776, 3034) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 3034), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 213ms
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.864-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-467--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.eff38aef-23eb-4467-8cee-94c26b66bfa0) with drop timestamp Timestamp(1574796732, 2029)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.835-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (021d2085-f347-4a54-a76b-e2710727cdf9)'. Ident: 'index-946--8000595249233899911', commit timestamp: 'Timestamp(1574796779, 3)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.881-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-469--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8) with drop timestamp Timestamp(1574796732, 2030)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.849-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 577d9f59-0dda-4673-a258-0d0b1b6f34c7: test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 (a7a18bfd-fa85-44a6-aec7-208302ab14a4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.044-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 99a8265e-16ea-4382-977a-2133353bd42c: test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 ( 28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.866-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-470--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8) with drop timestamp Timestamp(1574796732, 2030)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.835-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (021d2085-f347-4a54-a76b-e2710727cdf9)'. Ident: 'index-955--8000595249233899911', commit timestamp: 'Timestamp(1574796779, 3)'
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.883-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-480--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e) with drop timestamp Timestamp(1574796732, 2809)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.849-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.868-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-475--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8) with drop timestamp Timestamp(1574796732, 2030)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.835-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-945--8000595249233899911, commit timestamp: Timestamp(1574796779, 3)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.047-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: b0148589-933f-4cc4-81d6-8d774862c6b6: test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 ( dc8698bd-eac8-46db-a36a-65a1245ec94d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.884-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-483--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e) with drop timestamp Timestamp(1574796732, 2809)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.849-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.869-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-469--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.e892abbc-f1ad-4ae3-8c6b-6e7f3ffa10c8) with drop timestamp Timestamp(1574796732, 2030)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:02.686-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796779, 2516), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2773ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.836-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c with provided UUID: 88cb9f23-8e5b-4c74-bf4c-a31370660e68 and options: { uuid: UUID("88cb9f23-8e5b-4c74-bf4c-a31370660e68"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.047-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.885-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-479--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e) with drop timestamp Timestamp(1574796732, 2809)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.850-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 (ebd168fe-7c3a-4353-a065-9a920a4efab0) to test5_fsmdb0.agg_out and drop 021d2085-f347-4a54-a76b-e2710727cdf9.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.870-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-480--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e) with drop timestamp Timestamp(1574796732, 2809)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.837-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 4b128621-1a73-4439-be8c-203806464d11: test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 ( a7a18bfd-fa85-44a6-aec7-208302ab14a4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:57.049-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 with generated UUID: bd2d1605-349f-4c3a-a188-1a8ebe7263a4 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.886-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-478--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298) with drop timestamp Timestamp(1574796732, 3056)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.851-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.871-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-483--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e) with drop timestamp Timestamp(1574796732, 2809)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.851-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.786-0500 I INDEX [conn112] Index build completed: 99a8265e-16ea-4382-977a-2133353bd42c
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.886-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-481--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298) with drop timestamp Timestamp(1574796732, 3056)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.851-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (021d2085-f347-4a54-a76b-e2710727cdf9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 3), t: 1 } and commit timestamp Timestamp(1574796779, 3)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.872-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-479--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.9da8e9a7-c485-441a-ab2f-7b4b00120a0e) with drop timestamp Timestamp(1574796732, 2809)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.880-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.786-0500 I INDEX [conn110] Index build completed: b0148589-933f-4cc4-81d6-8d774862c6b6
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:32:57.887-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-477--7234316082034423155 (ns: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298) with drop timestamp Timestamp(1574796732, 3056)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.851-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (021d2085-f347-4a54-a76b-e2710727cdf9).
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.873-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-478--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298) with drop timestamp Timestamp(1574796732, 3056)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.881-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.786-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 5044), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 8309 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2878ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.851-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection ebd168fe-7c3a-4353-a065-9a920a4efab0 from test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.874-0500 I STORAGE [TimestampMonitor] Completing drop for ident index-481--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298) with drop timestamp Timestamp(1574796732, 3056)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.881-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 5d0d782a-38d9-4d27-b0b7-468966b8f9b3: test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 (bd2d1605-349f-4c3a-a188-1a8ebe7263a4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.786-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 5053), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2808ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.852-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (021d2085-f347-4a54-a76b-e2710727cdf9)'. Ident: 'index-946--4104909142373009110', commit timestamp: 'Timestamp(1574796779, 3)'
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:32:57.875-0500 I STORAGE [TimestampMonitor] Completing drop for ident collection-477--2310912778499990807 (ns: test4_fsmdb0.tmp.agg_out.05020806-dbdc-4665-aa14-1335a2bec298) with drop timestamp Timestamp(1574796732, 3056)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.881-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.789-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.852-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (021d2085-f347-4a54-a76b-e2710727cdf9)'. Ident: 'index-955--4104909142373009110', commit timestamp: 'Timestamp(1574796779, 3)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.881-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.796-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 57924b87-a720-48a1-99bb-f0e91a97c88a: test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 ( a7a18bfd-fa85-44a6-aec7-208302ab14a4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.852-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-945--4104909142373009110, commit timestamp: Timestamp(1574796779, 3)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.883-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c) to test5_fsmdb0.agg_out and drop ebd168fe-7c3a-4353-a065-9a920a4efab0.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.796-0500 I INDEX [conn46] Index build completed: 57924b87-a720-48a1-99bb-f0e91a97c88a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.853-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c with provided UUID: 88cb9f23-8e5b-4c74-bf4c-a31370660e68 and options: { uuid: UUID("88cb9f23-8e5b-4c74-bf4c-a31370660e68"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.883-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.796-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 5182), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2803ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.853-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 577d9f59-0dda-4673-a258-0d0b1b6f34c7: test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 ( a7a18bfd-fa85-44a6-aec7-208302ab14a4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.883-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (ebd168fe-7c3a-4353-a065-9a920a4efab0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 1085), t: 1 } and commit timestamp Timestamp(1574796779, 1085)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.801-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.873-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.883-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (ebd168fe-7c3a-4353-a065-9a920a4efab0).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.802-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.912-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.883-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c from test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.802-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 appName: "tid:0" command: create { create: "tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8", temp: true, validationLevel: "moderate", validationAction: "warn", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796777, 827), signature: { hash: BinData(0, 3843848A34750623FAFB6040C0469C0281FFF5B0), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2752ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.913-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.883-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ebd168fe-7c3a-4353-a065-9a920a4efab0)'. Ident: 'index-954--8000595249233899911', commit timestamp: 'Timestamp(1574796779, 1085)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:02.691-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796779, 2522), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2757ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.802-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (021d2085-f347-4a54-a76b-e2710727cdf9) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 3), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.913-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 9128583b-caa4-4e3a-99ba-443c640bb03c: test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 (bd2d1605-349f-4c3a-a188-1a8ebe7263a4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.883-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ebd168fe-7c3a-4353-a065-9a920a4efab0)'. Ident: 'index-961--8000595249233899911', commit timestamp: 'Timestamp(1574796779, 1085)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.802-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (021d2085-f347-4a54-a76b-e2710727cdf9).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.913-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.883-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-953--8000595249233899911, commit timestamp: Timestamp(1574796779, 1085)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.802-0500 I STORAGE [conn114] renameCollection: renaming collection ebd168fe-7c3a-4353-a065-9a920a4efab0 from test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.913-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.884-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 5d0d782a-38d9-4d27-b0b7-468966b8f9b3: test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 ( bd2d1605-349f-4c3a-a188-1a8ebe7263a4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.802-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (021d2085-f347-4a54-a76b-e2710727cdf9)'. Ident: 'index-938-8224331490264904478', commit timestamp: 'Timestamp(1574796779, 3)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.915-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c) to test5_fsmdb0.agg_out and drop ebd168fe-7c3a-4353-a065-9a920a4efab0.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.902-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.802-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (021d2085-f347-4a54-a76b-e2710727cdf9)'. Ident: 'index-939-8224331490264904478', commit timestamp: 'Timestamp(1574796779, 3)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.916-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.902-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.802-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-937-8224331490264904478, commit timestamp: Timestamp(1574796779, 3)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.916-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (ebd168fe-7c3a-4353-a065-9a920a4efab0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 1085), t: 1 } and commit timestamp Timestamp(1574796779, 1085)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.902-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 2da27384-c2eb-4d0b-888e-9c00798eb6c6: test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c (88cb9f23-8e5b-4c74-bf4c-a31370660e68 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.802-0500 I INDEX [conn108] Registering index build: fb49b6e4-0d35-4ee6-8281-066038cc05ee
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.916-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (ebd168fe-7c3a-4353-a065-9a920a4efab0).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.902-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.802-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37 appName: "tid:2" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.224bd2e1-e4ab-4029-ab22-f0082ab40c37", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "moderate", validationAction: "warn" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796777, 880), signature: { hash: BinData(0, 3843848A34750623FAFB6040C0469C0281FFF5B0), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2749152 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2749ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.916-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c from test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.903-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.802-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796776, 5046), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796776, 5046), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796776, 5046). Collection minimum timestamp is Timestamp(1574796777, 695)" errName:SnapshotUnavailable errCode:246 reslen:601 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2724123 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2724ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.916-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ebd168fe-7c3a-4353-a065-9a920a4efab0)'. Ident: 'index-954--4104909142373009110', commit timestamp: 'Timestamp(1574796779, 1085)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.904-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 (dc8698bd-eac8-46db-a36a-65a1245ec94d) to test5_fsmdb0.agg_out and drop 28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.803-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8601846922753189621, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6198767095349102395, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796776863), clusterTime: Timestamp(1574796776, 4361) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 4554), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2939ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.916-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ebd168fe-7c3a-4353-a065-9a920a4efab0)'. Ident: 'index-961--4104909142373009110', commit timestamp: 'Timestamp(1574796779, 1085)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.906-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.806-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c with generated UUID: 88cb9f23-8e5b-4c74-bf4c-a31370660e68 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.916-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-953--4104909142373009110, commit timestamp: Timestamp(1574796779, 1085)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.906-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 1268), t: 1 } and commit timestamp Timestamp(1574796779, 1268)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.821-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.919-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 9128583b-caa4-4e3a-99ba-443c640bb03c: test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 ( bd2d1605-349f-4c3a-a188-1a8ebe7263a4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.906-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.821-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.936-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.906-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection dc8698bd-eac8-46db-a36a-65a1245ec94d from test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.821-0500 I STORAGE [conn108] Index build initialized: fb49b6e4-0d35-4ee6-8281-066038cc05ee: test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 (bd2d1605-349f-4c3a-a188-1a8ebe7263a4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.936-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.907-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c)'. Ident: 'index-952--8000595249233899911', commit timestamp: 'Timestamp(1574796779, 1268)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.821-0500 I INDEX [conn108] Waiting for index build to complete: fb49b6e4-0d35-4ee6-8281-066038cc05ee
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.936-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 9650990d-27a4-46f9-b2d2-4842b07e5408: test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c (88cb9f23-8e5b-4c74-bf4c-a31370660e68 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.907-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c)'. Ident: 'index-963--8000595249233899911', commit timestamp: 'Timestamp(1574796779, 1268)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.821-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.936-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.907-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-951--8000595249233899911, commit timestamp: Timestamp(1574796779, 1268)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.821-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.937-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.909-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 2da27384-c2eb-4d0b-888e-9c00798eb6c6: test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c ( 88cb9f23-8e5b-4c74-bf4c-a31370660e68 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.829-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.938-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 (dc8698bd-eac8-46db-a36a-65a1245ec94d) to test5_fsmdb0.agg_out and drop 28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.909-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea with provided UUID: a268a754-2354-45e9-8a64-4def0e932572 and options: { uuid: UUID("a268a754-2354-45e9-8a64-4def0e932572"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.830-0500 I INDEX [conn114] Registering index build: d291b1e7-fee8-48ca-8fef-98ce7b89bc70
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.939-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.923-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.835-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.939-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 1268), t: 1 } and commit timestamp Timestamp(1574796779, 1268)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.924-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b with provided UUID: 48c18786-54b7-4f64-b76d-0c7a93c7ac87 and options: { uuid: UUID("48c18786-54b7-4f64-b76d-0c7a93c7ac87"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.845-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: fb49b6e4-0d35-4ee6-8281-066038cc05ee: test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 ( bd2d1605-349f-4c3a-a188-1a8ebe7263a4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.939-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.940-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.852-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.939-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection dc8698bd-eac8-46db-a36a-65a1245ec94d from test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.945-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 (a7a18bfd-fa85-44a6-aec7-208302ab14a4) to test5_fsmdb0.agg_out and drop dc8698bd-eac8-46db-a36a-65a1245ec94d.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.852-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.939-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c)'. Ident: 'index-952--4104909142373009110', commit timestamp: 'Timestamp(1574796779, 1268)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.946-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (dc8698bd-eac8-46db-a36a-65a1245ec94d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 1643), t: 1 } and commit timestamp Timestamp(1574796779, 1643)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.852-0500 I STORAGE [conn114] Index build initialized: d291b1e7-fee8-48ca-8fef-98ce7b89bc70: test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c (88cb9f23-8e5b-4c74-bf4c-a31370660e68 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.939-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c)'. Ident: 'index-963--4104909142373009110', commit timestamp: 'Timestamp(1574796779, 1268)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.946-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (dc8698bd-eac8-46db-a36a-65a1245ec94d).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.852-0500 I INDEX [conn114] Waiting for index build to complete: d291b1e7-fee8-48ca-8fef-98ce7b89bc70
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.939-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-951--4104909142373009110, commit timestamp: Timestamp(1574796779, 1268)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.946-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection a7a18bfd-fa85-44a6-aec7-208302ab14a4 from test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.852-0500 I INDEX [conn108] Index build completed: fb49b6e4-0d35-4ee6-8281-066038cc05ee
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.941-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 9650990d-27a4-46f9-b2d2-4842b07e5408: test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c ( 88cb9f23-8e5b-4c74-bf4c-a31370660e68 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.946-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (dc8698bd-eac8-46db-a36a-65a1245ec94d)'. Ident: 'index-958--8000595249233899911', commit timestamp: 'Timestamp(1574796779, 1643)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.852-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.942-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea with provided UUID: a268a754-2354-45e9-8a64-4def0e932572 and options: { uuid: UUID("a268a754-2354-45e9-8a64-4def0e932572"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.946-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (dc8698bd-eac8-46db-a36a-65a1245ec94d)'. Ident: 'index-965--8000595249233899911', commit timestamp: 'Timestamp(1574796779, 1643)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.852-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (ebd168fe-7c3a-4353-a065-9a920a4efab0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 1085), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.956-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.946-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-957--8000595249233899911, commit timestamp: Timestamp(1574796779, 1643)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.852-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (ebd168fe-7c3a-4353-a065-9a920a4efab0).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.957-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b with provided UUID: 48c18786-54b7-4f64-b76d-0c7a93c7ac87 and options: { uuid: UUID("48c18786-54b7-4f64-b76d-0c7a93c7ac87"), temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.953-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 (bd2d1605-349f-4c3a-a188-1a8ebe7263a4) to test5_fsmdb0.agg_out and drop a7a18bfd-fa85-44a6-aec7-208302ab14a4.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.852-0500 I STORAGE [conn46] renameCollection: renaming collection 28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c from test5_fsmdb0.tmp.agg_out.a4269228-8879-47d3-824e-40979516bff2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.972-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.953-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (a7a18bfd-fa85-44a6-aec7-208302ab14a4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 2517), t: 1 } and commit timestamp Timestamp(1574796779, 2517)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.852-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ebd168fe-7c3a-4353-a065-9a920a4efab0)'. Ident: 'index-944-8224331490264904478', commit timestamp: 'Timestamp(1574796779, 1085)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.977-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 (a7a18bfd-fa85-44a6-aec7-208302ab14a4) to test5_fsmdb0.agg_out and drop dc8698bd-eac8-46db-a36a-65a1245ec94d.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.953-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (a7a18bfd-fa85-44a6-aec7-208302ab14a4).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.852-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ebd168fe-7c3a-4353-a065-9a920a4efab0)'. Ident: 'index-945-8224331490264904478', commit timestamp: 'Timestamp(1574796779, 1085)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.977-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (dc8698bd-eac8-46db-a36a-65a1245ec94d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 1643), t: 1 } and commit timestamp Timestamp(1574796779, 1643)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.953-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection bd2d1605-349f-4c3a-a188-1a8ebe7263a4 from test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.852-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-941-8224331490264904478, commit timestamp: Timestamp(1574796779, 1085)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.977-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (dc8698bd-eac8-46db-a36a-65a1245ec94d).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.953-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a7a18bfd-fa85-44a6-aec7-208302ab14a4)'. Ident: 'index-960--8000595249233899911', commit timestamp: 'Timestamp(1574796779, 2517)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.853-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 180055038306317948, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6762872252109127052, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796776860), clusterTime: Timestamp(1574796776, 4297) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 4361), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2991ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.977-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection a7a18bfd-fa85-44a6-aec7-208302ab14a4 from test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.953-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a7a18bfd-fa85-44a6-aec7-208302ab14a4)'. Ident: 'index-969--8000595249233899911', commit timestamp: 'Timestamp(1574796779, 2517)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.856-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.977-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (dc8698bd-eac8-46db-a36a-65a1245ec94d)'. Ident: 'index-958--4104909142373009110', commit timestamp: 'Timestamp(1574796779, 1643)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.953-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-959--8000595249233899911, commit timestamp: Timestamp(1574796779, 2517)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.857-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.977-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (dc8698bd-eac8-46db-a36a-65a1245ec94d)'. Ident: 'index-965--4104909142373009110', commit timestamp: 'Timestamp(1574796779, 1643)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.954-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c (88cb9f23-8e5b-4c74-bf4c-a31370660e68) to test5_fsmdb0.agg_out and drop bd2d1605-349f-4c3a-a188-1a8ebe7263a4.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.857-0500 I COMMAND [conn68] CMD: dropIndexes test5_fsmdb0.agg_out: { rand: -1.0, randInt: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.977-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-957--4104909142373009110, commit timestamp: Timestamp(1574796779, 1643)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.954-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (bd2d1605-349f-4c3a-a188-1a8ebe7263a4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 2518), t: 1 } and commit timestamp Timestamp(1574796779, 2518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.859-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.983-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 (bd2d1605-349f-4c3a-a188-1a8ebe7263a4) to test5_fsmdb0.agg_out and drop a7a18bfd-fa85-44a6-aec7-208302ab14a4.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.954-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (bd2d1605-349f-4c3a-a188-1a8ebe7263a4).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.860-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.983-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (a7a18bfd-fa85-44a6-aec7-208302ab14a4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 2517), t: 1 } and commit timestamp Timestamp(1574796779, 2517)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.954-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 88cb9f23-8e5b-4c74-bf4c-a31370660e68 from test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.860-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 1268), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.983-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (a7a18bfd-fa85-44a6-aec7-208302ab14a4).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.954-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bd2d1605-349f-4c3a-a188-1a8ebe7263a4)'. Ident: 'index-968--8000595249233899911', commit timestamp: 'Timestamp(1574796779, 2518)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.860-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.983-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection bd2d1605-349f-4c3a-a188-1a8ebe7263a4 from test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.954-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bd2d1605-349f-4c3a-a188-1a8ebe7263a4)'. Ident: 'index-973--8000595249233899911', commit timestamp: 'Timestamp(1574796779, 2518)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.860-0500 I STORAGE [conn112] renameCollection: renaming collection dc8698bd-eac8-46db-a36a-65a1245ec94d from test5_fsmdb0.tmp.agg_out.0a8b691d-e2b4-4e8d-98c3-6836a9388ac0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.983-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a7a18bfd-fa85-44a6-aec7-208302ab14a4)'. Ident: 'index-960--4104909142373009110', commit timestamp: 'Timestamp(1574796779, 2517)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.954-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-967--8000595249233899911, commit timestamp: Timestamp(1574796779, 2518)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.860-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c)'. Ident: 'index-943-8224331490264904478', commit timestamp: 'Timestamp(1574796779, 1268)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.983-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a7a18bfd-fa85-44a6-aec7-208302ab14a4)'. Ident: 'index-969--4104909142373009110', commit timestamp: 'Timestamp(1574796779, 2517)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.971-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.860-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (28efb3c8-ca51-4c9a-88b4-8f1187d1fd0c)'. Ident: 'index-947-8224331490264904478', commit timestamp: 'Timestamp(1574796779, 1268)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.983-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-959--4104909142373009110, commit timestamp: Timestamp(1574796779, 2517)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.971-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.860-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-940-8224331490264904478, commit timestamp: Timestamp(1574796779, 1268)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.983-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c (88cb9f23-8e5b-4c74-bf4c-a31370660e68) to test5_fsmdb0.agg_out and drop bd2d1605-349f-4c3a-a188-1a8ebe7263a4.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.971-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: e35584b9-6bac-4244-a3f4-d46a8e97cf30: test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b (48c18786-54b7-4f64-b76d-0c7a93c7ac87 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.861-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 583852999959353037, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2457057304469155212, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796776917), clusterTime: Timestamp(1574796776, 5046) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 5046), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 19487 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2942ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.984-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (bd2d1605-349f-4c3a-a188-1a8ebe7263a4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 2518), t: 1 } and commit timestamp Timestamp(1574796779, 2518)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.972-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.861-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea with generated UUID: a268a754-2354-45e9-8a64-4def0e932572 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.984-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (bd2d1605-349f-4c3a-a188-1a8ebe7263a4).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.972-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.862-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: d291b1e7-fee8-48ca-8fef-98ce7b89bc70: test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c ( 88cb9f23-8e5b-4c74-bf4c-a31370660e68 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.984-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 88cb9f23-8e5b-4c74-bf4c-a31370660e68 from test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.975-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.862-0500 I INDEX [conn114] Index build completed: d291b1e7-fee8-48ca-8fef-98ce7b89bc70
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.984-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bd2d1605-349f-4c3a-a188-1a8ebe7263a4)'. Ident: 'index-968--4104909142373009110', commit timestamp: 'Timestamp(1574796779, 2518)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.976-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: e35584b9-6bac-4244-a3f4-d46a8e97cf30: test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b ( 48c18786-54b7-4f64-b76d-0c7a93c7ac87 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.868-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b with generated UUID: 48c18786-54b7-4f64-b76d-0c7a93c7ac87 and options: { temp: true, validationLevel: "moderate", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.984-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bd2d1605-349f-4c3a-a188-1a8ebe7263a4)'. Ident: 'index-973--4104909142373009110', commit timestamp: 'Timestamp(1574796779, 2518)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.977-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f with provided UUID: 526918bd-08ea-4b6f-9911-cef9acd4f152 and options: { uuid: UUID("526918bd-08ea-4b6f-9911-cef9acd4f152"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.886-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:32:59.984-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-967--4104909142373009110, commit timestamp: Timestamp(1574796779, 2518)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.992-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.893-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.000-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:32:59.997-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 with provided UUID: e26c4867-07b9-43f2-97fb-c5bb7e770e88 and options: { uuid: UUID("e26c4867-07b9-43f2-97fb-c5bb7e770e88"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.893-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.000-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.013-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.894-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (dc8698bd-eac8-46db-a36a-65a1245ec94d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 1643), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.001-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: fbe031d3-edc6-464d-ac2d-00e335e61bc6: test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b (48c18786-54b7-4f64-b76d-0c7a93c7ac87 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.014-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c with provided UUID: 6b287a9e-fb9c-4d0e-9c02-d148d34708d8 and options: { uuid: UUID("6b287a9e-fb9c-4d0e-9c02-d148d34708d8"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.894-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (dc8698bd-eac8-46db-a36a-65a1245ec94d).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.001-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.028-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.894-0500 I STORAGE [conn110] renameCollection: renaming collection a7a18bfd-fa85-44a6-aec7-208302ab14a4 from test5_fsmdb0.tmp.agg_out.fb969fb8-b5b0-4686-bddd-c20c07a2a8c8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.001-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.048-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.894-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (dc8698bd-eac8-46db-a36a-65a1245ec94d)'. Ident: 'index-950-8224331490264904478', commit timestamp: 'Timestamp(1574796779, 1643)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.004-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.048-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.894-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (dc8698bd-eac8-46db-a36a-65a1245ec94d)'. Ident: 'index-953-8224331490264904478', commit timestamp: 'Timestamp(1574796779, 1643)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.005-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f with provided UUID: 526918bd-08ea-4b6f-9911-cef9acd4f152 and options: { uuid: UUID("526918bd-08ea-4b6f-9911-cef9acd4f152"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.048-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: e9f5ca2b-9ff0-47cc-b06e-4398c85841b3: test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea (a268a754-2354-45e9-8a64-4def0e932572 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.894-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-948-8224331490264904478, commit timestamp: Timestamp(1574796779, 1643)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.007-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: fbe031d3-edc6-464d-ac2d-00e335e61bc6: test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b ( 48c18786-54b7-4f64-b76d-0c7a93c7ac87 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.048-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.894-0500 I INDEX [conn46] Registering index build: 4178e408-8d83-47ce-b479-dd5884dc933c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.023-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.049-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.894-0500 I INDEX [conn108] Registering index build: a2d5b836-dac6-4a6d-91eb-3d48cf5afb33
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.024-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 with provided UUID: e26c4867-07b9-43f2-97fb-c5bb7e770e88 and options: { uuid: UUID("e26c4867-07b9-43f2-97fb-c5bb7e770e88"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.052-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 7 side writes (inserted: 7, deleted: 0) for '_id_hashed' in 1 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.894-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2087101508761164654, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3115115742170618112, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796776939), clusterTime: Timestamp(1574796776, 5049) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796776, 5050), signature: { hash: BinData(0, 93EE4854E5575B54E85D4FEF9278082F230517FE), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2954ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.039-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.052-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: drain applied 57 side writes (inserted: 57, deleted: 0) for '_id_hashed' in 0 ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.896-0500 I COMMAND [conn71] CMD: dropIndexes test5_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.040-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c with provided UUID: 6b287a9e-fb9c-4d0e-9c02-d148d34708d8 and options: { uuid: UUID("6b287a9e-fb9c-4d0e-9c02-d148d34708d8"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.052-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.911-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.059-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.055-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: e9f5ca2b-9ff0-47cc-b06e-4398c85841b3: test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea ( a268a754-2354-45e9-8a64-4def0e932572 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.911-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.079-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.060-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.911-0500 I STORAGE [conn46] Index build initialized: 4178e408-8d83-47ce-b479-dd5884dc933c: test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b (48c18786-54b7-4f64-b76d-0c7a93c7ac87 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.079-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.060-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b (48c18786-54b7-4f64-b76d-0c7a93c7ac87) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796780, 439), t: 1 } and commit timestamp Timestamp(1574796780, 439)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.911-0500 I INDEX [conn46] Waiting for index build to complete: 4178e408-8d83-47ce-b479-dd5884dc933c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.079-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 09deecb6-ecbd-45c1-96b3-5aa776b75916: test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea (a268a754-2354-45e9-8a64-4def0e932572 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.061-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b (48c18786-54b7-4f64-b76d-0c7a93c7ac87).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.911-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.079-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.061-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b (48c18786-54b7-4f64-b76d-0c7a93c7ac87)'. Ident: 'index-980--8000595249233899911', commit timestamp: 'Timestamp(1574796780, 439)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.912-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (a7a18bfd-fa85-44a6-aec7-208302ab14a4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 2517), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.080-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.061-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b (48c18786-54b7-4f64-b76d-0c7a93c7ac87)'. Ident: 'index-981--8000595249233899911', commit timestamp: 'Timestamp(1574796780, 439)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.912-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (a7a18bfd-fa85-44a6-aec7-208302ab14a4).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.083-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: drain applied 64 side writes (inserted: 64, deleted: 0) for '_id_hashed' in 1 ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.061-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b'. Ident: collection-979--8000595249233899911, commit timestamp: Timestamp(1574796780, 439)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.912-0500 I STORAGE [conn112] renameCollection: renaming collection bd2d1605-349f-4c3a-a188-1a8ebe7263a4 from test5_fsmdb0.tmp.agg_out.70537077-d34e-4e8f-86e4-476e6bf6f0d8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.083-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.078-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.912-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a7a18bfd-fa85-44a6-aec7-208302ab14a4)'. Ident: 'index-952-8224331490264904478', commit timestamp: 'Timestamp(1574796779, 2517)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.085-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 09deecb6-ecbd-45c1-96b3-5aa776b75916: test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea ( a268a754-2354-45e9-8a64-4def0e932572 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.078-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.912-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a7a18bfd-fa85-44a6-aec7-208302ab14a4)'. Ident: 'index-955-8224331490264904478', commit timestamp: 'Timestamp(1574796779, 2517)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.087-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.078-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: df5ad11f-6f7c-4eb4-a82d-56b720bc25b2: test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c (6b287a9e-fb9c-4d0e-9c02-d148d34708d8 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.912-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-949-8224331490264904478, commit timestamp: Timestamp(1574796779, 2517)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.087-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b (48c18786-54b7-4f64-b76d-0c7a93c7ac87) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796780, 439), t: 1 } and commit timestamp Timestamp(1574796780, 439)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.078-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.912-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.087-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b (48c18786-54b7-4f64-b76d-0c7a93c7ac87).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.078-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.912-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (bd2d1605-349f-4c3a-a188-1a8ebe7263a4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796779, 2518), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.087-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b (48c18786-54b7-4f64-b76d-0c7a93c7ac87)'. Ident: 'index-980--4104909142373009110', commit timestamp: 'Timestamp(1574796780, 439)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.080-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.912-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (bd2d1605-349f-4c3a-a188-1a8ebe7263a4).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.087-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b (48c18786-54b7-4f64-b76d-0c7a93c7ac87)'. Ident: 'index-981--4104909142373009110', commit timestamp: 'Timestamp(1574796780, 439)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.089-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: df5ad11f-6f7c-4eb4-a82d-56b720bc25b2: test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c ( 6b287a9e-fb9c-4d0e-9c02-d148d34708d8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.912-0500 I STORAGE [conn114] renameCollection: renaming collection 88cb9f23-8e5b-4c74-bf4c-a31370660e68 from test5_fsmdb0.tmp.agg_out.00264808-fe11-4e7b-9dd5-2cf712e2ec7c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.087-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b'. Ident: collection-979--4104909142373009110, commit timestamp: Timestamp(1574796780, 439)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.097-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.912-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4885792215386136158, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6487255351983755102, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796777045), clusterTime: Timestamp(1574796777, 763) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796777, 763), signature: { hash: BinData(0, 3843848A34750623FAFB6040C0469C0281FFF5B0), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796776, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2864ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.102-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.097-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.912-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bd2d1605-349f-4c3a-a188-1a8ebe7263a4)'. Ident: 'index-958-8224331490264904478', commit timestamp: 'Timestamp(1574796779, 2518)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.103-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.097-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 31a279e1-af5c-4e3c-be6c-d9a090614ea0: test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f (526918bd-08ea-4b6f-9911-cef9acd4f152 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.912-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bd2d1605-349f-4c3a-a188-1a8ebe7263a4)'. Ident: 'index-959-8224331490264904478', commit timestamp: 'Timestamp(1574796779, 2518)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.103-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: af8acefe-e60b-47fc-8db4-881e451d506d: test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c (6b287a9e-fb9c-4d0e-9c02-d148d34708d8 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.097-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.912-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-957-8224331490264904478, commit timestamp: Timestamp(1574796779, 2518)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.103-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.098-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.912-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.103-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.100-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.912-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3054399152127509261, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6664241797613734364, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796779804), clusterTime: Timestamp(1574796779, 3) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796779, 3), signature: { hash: BinData(0, 62DF4306384833AB2A703DF64C47402873CAAC75), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 106ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.106-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.100-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea (a268a754-2354-45e9-8a64-4def0e932572) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796780, 445), t: 1 } and commit timestamp Timestamp(1574796780, 445)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.913-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.115-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: af8acefe-e60b-47fc-8db4-881e451d506d: test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c ( 6b287a9e-fb9c-4d0e-9c02-d148d34708d8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.100-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea (a268a754-2354-45e9-8a64-4def0e932572).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.925-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.122-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.100-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea (a268a754-2354-45e9-8a64-4def0e932572)'. Ident: 'index-978--8000595249233899911', commit timestamp: 'Timestamp(1574796780, 445)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.932-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.122-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.100-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea (a268a754-2354-45e9-8a64-4def0e932572)'. Ident: 'index-989--8000595249233899911', commit timestamp: 'Timestamp(1574796780, 445)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.932-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.122-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 054e3b4d-b367-4686-918d-6a1633639777: test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f (526918bd-08ea-4b6f-9911-cef9acd4f152 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.100-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea'. Ident: collection-977--8000595249233899911, commit timestamp: Timestamp(1574796780, 445)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.932-0500 I STORAGE [conn108] Index build initialized: a2d5b836-dac6-4a6d-91eb-3d48cf5afb33: test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea (a268a754-2354-45e9-8a64-4def0e932572 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.122-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.100-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.932-0500 I INDEX [conn108] Waiting for index build to complete: a2d5b836-dac6-4a6d-91eb-3d48cf5afb33
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.123-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.104-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 31a279e1-af5c-4e3c-be6c-d9a090614ea0: test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f ( 526918bd-08ea-4b6f-9911-cef9acd4f152 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.933-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 4178e408-8d83-47ce-b479-dd5884dc933c: test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b ( 48c18786-54b7-4f64-b76d-0c7a93c7ac87 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.124-0500 I COMMAND [ReplWriterWorker-9] CMD: drop test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.119-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.933-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.124-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea (a268a754-2354-45e9-8a64-4def0e932572) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796780, 445), t: 1 } and commit timestamp Timestamp(1574796780, 445)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.119-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.933-0500 I INDEX [conn46] Index build completed: 4178e408-8d83-47ce-b479-dd5884dc933c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.124-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea (a268a754-2354-45e9-8a64-4def0e932572).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.119-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 769f7cbb-8b4d-4640-99dc-36b4c2a205c1: test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 (e26c4867-07b9-43f2-97fb-c5bb7e770e88 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.933-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f with generated UUID: 526918bd-08ea-4b6f-9911-cef9acd4f152 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.124-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea (a268a754-2354-45e9-8a64-4def0e932572)'. Ident: 'index-978--4104909142373009110', commit timestamp: 'Timestamp(1574796780, 445)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.119-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.933-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.124-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea (a268a754-2354-45e9-8a64-4def0e932572)'. Ident: 'index-989--4104909142373009110', commit timestamp: 'Timestamp(1574796780, 445)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.120-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.937-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 with generated UUID: e26c4867-07b9-43f2-97fb-c5bb7e770e88 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.124-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea'. Ident: collection-977--4104909142373009110, commit timestamp: Timestamp(1574796780, 445)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.121-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 with provided UUID: 223b2910-58e8-47c8-bc49-888d49f5f240 and options: { uuid: UUID("223b2910-58e8-47c8-bc49-888d49f5f240"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.937-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c with generated UUID: 6b287a9e-fb9c-4d0e-9c02-d148d34708d8 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.126-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.122-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.942-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.167-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 054e3b4d-b367-4686-918d-6a1633639777: test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f ( 526918bd-08ea-4b6f-9911-cef9acd4f152 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.133-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 769f7cbb-8b4d-4640-99dc-36b4c2a205c1: test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 ( e26c4867-07b9-43f2-97fb-c5bb7e770e88 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.945-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: a2d5b836-dac6-4a6d-91eb-3d48cf5afb33: test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea ( a268a754-2354-45e9-8a64-4def0e932572 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.174-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:02.709-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796779, 2518), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2795ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.171-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.945-0500 I INDEX [conn108] Index build completed: a2d5b836-dac6-4a6d-91eb-3d48cf5afb33
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.174-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.172-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab with provided UUID: db8c1047-9657-4464-a6b8-50983e8706cb and options: { uuid: UUID("db8c1047-9657-4464-a6b8-50983e8706cb"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.961-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.174-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 69706223-7824-443f-b2a2-dec6f67da107: test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 (e26c4867-07b9-43f2-97fb-c5bb7e770e88 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.186-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.961-0500 I INDEX [conn114] Registering index build: 1d124553-2a03-4481-8a16-f2cb363168d5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.174-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.194-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f (526918bd-08ea-4b6f-9911-cef9acd4f152) to test5_fsmdb0.agg_out and drop 88cb9f23-8e5b-4c74-bf4c-a31370660e68.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.978-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.175-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.194-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (88cb9f23-8e5b-4c74-bf4c-a31370660e68) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796780, 1399), t: 1 } and commit timestamp Timestamp(1574796780, 1399)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:32:59.994-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.176-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 with provided UUID: 223b2910-58e8-47c8-bc49-888d49f5f240 and options: { uuid: UUID("223b2910-58e8-47c8-bc49-888d49f5f240"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.194-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (88cb9f23-8e5b-4c74-bf4c-a31370660e68).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.001-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.177-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.194-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 526918bd-08ea-4b6f-9911-cef9acd4f152 from test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.001-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.185-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 69706223-7824-443f-b2a2-dec6f67da107: test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 ( e26c4867-07b9-43f2-97fb-c5bb7e770e88 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.194-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (88cb9f23-8e5b-4c74-bf4c-a31370660e68)'. Ident: 'index-972--8000595249233899911', commit timestamp: 'Timestamp(1574796780, 1399)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.001-0500 I STORAGE [conn114] Index build initialized: 1d124553-2a03-4481-8a16-f2cb363168d5: test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f (526918bd-08ea-4b6f-9911-cef9acd4f152 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.191-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.194-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (88cb9f23-8e5b-4c74-bf4c-a31370660e68)'. Ident: 'index-975--8000595249233899911', commit timestamp: 'Timestamp(1574796780, 1399)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.001-0500 I INDEX [conn114] Waiting for index build to complete: 1d124553-2a03-4481-8a16-f2cb363168d5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.193-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab with provided UUID: db8c1047-9657-4464-a6b8-50983e8706cb and options: { uuid: UUID("db8c1047-9657-4464-a6b8-50983e8706cb"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:00.194-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-971--8000595249233899911, commit timestamp: Timestamp(1574796780, 1399)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.002-0500 I INDEX [conn110] Registering index build: 94774c90-e070-4805-a7df-08f95f61b3e8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.206-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.002-0500 I INDEX [conn112] Registering index build: b0ad19f9-a949-4e13-a599-bf13f8f1dc33
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.711-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.213-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f (526918bd-08ea-4b6f-9911-cef9acd4f152) to test5_fsmdb0.agg_out and drop 88cb9f23-8e5b-4c74-bf4c-a31370660e68.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.002-0500 I COMMAND [conn46] CMD: drop test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.711-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.213-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (88cb9f23-8e5b-4c74-bf4c-a31370660e68) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796780, 1399), t: 1 } and commit timestamp Timestamp(1574796780, 1399)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.018-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.711-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: d5fb7059-a2a9-46fd-9f0a-23c9e00f6916: test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 (223b2910-58e8-47c8-bc49-888d49f5f240 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.213-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (88cb9f23-8e5b-4c74-bf4c-a31370660e68).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.018-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.711-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.213-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 526918bd-08ea-4b6f-9911-cef9acd4f152 from test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.018-0500 I STORAGE [conn110] Index build initialized: 94774c90-e070-4805-a7df-08f95f61b3e8: test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c (6b287a9e-fb9c-4d0e-9c02-d148d34708d8 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.711-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.213-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (88cb9f23-8e5b-4c74-bf4c-a31370660e68)'. Ident: 'index-972--4104909142373009110', commit timestamp: 'Timestamp(1574796780, 1399)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.018-0500 I INDEX [conn110] Waiting for index build to complete: 94774c90-e070-4805-a7df-08f95f61b3e8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.213-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (88cb9f23-8e5b-4c74-bf4c-a31370660e68)'. Ident: 'index-975--4104909142373009110', commit timestamp: 'Timestamp(1574796780, 1399)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.713-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c (6b287a9e-fb9c-4d0e-9c02-d148d34708d8) to test5_fsmdb0.agg_out and drop 526918bd-08ea-4b6f-9911-cef9acd4f152.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.018-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b (48c18786-54b7-4f64-b76d-0c7a93c7ac87) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.213-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-971--4104909142373009110, commit timestamp: Timestamp(1574796780, 1399)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.018-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b (48c18786-54b7-4f64-b76d-0c7a93c7ac87).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:00.216-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796780, 1399) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796780, 1527), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 4766 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 115ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.018-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b (48c18786-54b7-4f64-b76d-0c7a93c7ac87)'. Ident: 'index-968-8224331490264904478', commit timestamp: 'Timestamp(1574796780, 439)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.018-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b (48c18786-54b7-4f64-b76d-0c7a93c7ac87)'. Ident: 'index-969-8224331490264904478', commit timestamp: 'Timestamp(1574796780, 439)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.018-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b'. Ident: collection-966-8224331490264904478, commit timestamp: Timestamp(1574796780, 439)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.019-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.019-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.019-0500 I COMMAND [conn70] command test5_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2030006425466826117, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7662360306173746500, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796779862), clusterTime: Timestamp(1574796779, 1332) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796779, 1333), signature: { hash: BinData(0, 62DF4306384833AB2A703DF64C47402873CAAC75), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.fe0e91c2-7874-46b4-b6d6-54817102b77b\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:993 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 155ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.019-0500 I COMMAND [conn108] CMD: drop test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.019-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.020-0500 I COMMAND [conn70] CMD: dropIndexes test5_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.022-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.022-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.029-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 94774c90-e070-4805-a7df-08f95f61b3e8: test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c ( 6b287a9e-fb9c-4d0e-9c02-d148d34708d8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.031-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.039-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.039-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.039-0500 I STORAGE [conn112] Index build initialized: b0ad19f9-a949-4e13-a599-bf13f8f1dc33: test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 (e26c4867-07b9-43f2-97fb-c5bb7e770e88 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.039-0500 I INDEX [conn112] Waiting for index build to complete: b0ad19f9-a949-4e13-a599-bf13f8f1dc33
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.039-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.039-0500 I INDEX [conn110] Index build completed: 94774c90-e070-4805-a7df-08f95f61b3e8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.039-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea (a268a754-2354-45e9-8a64-4def0e932572) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.039-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea (a268a754-2354-45e9-8a64-4def0e932572).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.039-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea (a268a754-2354-45e9-8a64-4def0e932572)'. Ident: 'index-967-8224331490264904478', commit timestamp: 'Timestamp(1574796780, 445)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.039-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea (a268a754-2354-45e9-8a64-4def0e932572)'. Ident: 'index-971-8224331490264904478', commit timestamp: 'Timestamp(1574796780, 445)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.039-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea'. Ident: collection-965-8224331490264904478, commit timestamp: Timestamp(1574796780, 445)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.039-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 1d124553-2a03-4481-8a16-f2cb363168d5: test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f ( 526918bd-08ea-4b6f-9911-cef9acd4f152 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.039-0500 I INDEX [conn114] Index build completed: 1d124553-2a03-4481-8a16-f2cb363168d5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.039-0500 I COMMAND [conn68] command test5_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1092739655595121385, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1600594644196994477, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796779859), clusterTime: Timestamp(1574796779, 1265) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796779, 1332), signature: { hash: BinData(0, 62DF4306384833AB2A703DF64C47402873CAAC75), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.01fa4c33-b26b-4d5d-813a-58d47a16c0ea\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"moderate\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"moderate\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"warn\" }" errName:CommandFailed errCode:125 reslen:993 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 178ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.040-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.043-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.048-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 with generated UUID: 223b2910-58e8-47c8-bc49-888d49f5f240 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.048-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab with generated UUID: db8c1047-9657-4464-a6b8-50983e8706cb and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.049-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: b0ad19f9-a949-4e13-a599-bf13f8f1dc33: test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 ( e26c4867-07b9-43f2-97fb-c5bb7e770e88 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.049-0500 I INDEX [conn112] Index build completed: b0ad19f9-a949-4e13-a599-bf13f8f1dc33
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.072-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.072-0500 I INDEX [conn108] Registering index build: 61795036-d24f-4c59-98d5-51e62b6d5b7f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.080-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.096-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.096-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.096-0500 I STORAGE [conn108] Index build initialized: 61795036-d24f-4c59-98d5-51e62b6d5b7f: test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 (223b2910-58e8-47c8-bc49-888d49f5f240 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.096-0500 I INDEX [conn108] Waiting for index build to complete: 61795036-d24f-4c59-98d5-51e62b6d5b7f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.096-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.096-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (88cb9f23-8e5b-4c74-bf4c-a31370660e68) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796780, 1399), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.096-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (88cb9f23-8e5b-4c74-bf4c-a31370660e68).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.096-0500 I STORAGE [conn110] renameCollection: renaming collection 526918bd-08ea-4b6f-9911-cef9acd4f152 from test5_fsmdb0.tmp.agg_out.5fc13084-62a3-43a9-a6e2-46ffa72fba7f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.096-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (88cb9f23-8e5b-4c74-bf4c-a31370660e68)'. Ident: 'index-962-8224331490264904478', commit timestamp: 'Timestamp(1574796780, 1399)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.096-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (88cb9f23-8e5b-4c74-bf4c-a31370660e68)'. Ident: 'index-963-8224331490264904478', commit timestamp: 'Timestamp(1574796780, 1399)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.096-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-960-8224331490264904478, commit timestamp: Timestamp(1574796780, 1399)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.097-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.097-0500 I INDEX [conn46] Registering index build: f84aabd1-daf5-4896-91da-0e1b0c653f07
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.714-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.728-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:02.771-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796780, 445), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2730ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:02.806-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796780, 444), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2765ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.097-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 9200736309259306840, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6847170038725014460, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796779913), clusterTime: Timestamp(1574796779, 2516) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796779, 2518), signature: { hash: BinData(0, 62DF4306384833AB2A703DF64C47402873CAAC75), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 18970 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 183ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.728-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.715-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (526918bd-08ea-4b6f-9911-cef9acd4f152) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 4), t: 1 } and commit timestamp Timestamp(1574796782, 4)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:02.849-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796782, 56), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 157ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:02.848-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796780, 1527), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 160ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.098-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.715-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (526918bd-08ea-4b6f-9911-cef9acd4f152).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.728-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 7a02e97e-148a-43dd-be43-4765d5aac48f: test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 (223b2910-58e8-47c8-bc49-888d49f5f240 ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:02.900-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796782, 60), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 189ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:00.115-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.715-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 6b287a9e-fb9c-4d0e-9c02-d148d34708d8 from test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.728-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:02.950-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796782, 695), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 177ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.686-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.715-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (526918bd-08ea-4b6f-9911-cef9acd4f152)'. Ident: 'index-984--8000595249233899911', commit timestamp: 'Timestamp(1574796782, 4)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.728-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.686-0500 I STORAGE [conn46] Index build initialized: f84aabd1-daf5-4896-91da-0e1b0c653f07: test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab (db8c1047-9657-4464-a6b8-50983e8706cb ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.715-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (526918bd-08ea-4b6f-9911-cef9acd4f152)'. Ident: 'index-993--8000595249233899911', commit timestamp: 'Timestamp(1574796782, 4)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.730-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c (6b287a9e-fb9c-4d0e-9c02-d148d34708d8) to test5_fsmdb0.agg_out and drop 526918bd-08ea-4b6f-9911-cef9acd4f152.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.686-0500 I INDEX [conn46] Waiting for index build to complete: f84aabd1-daf5-4896-91da-0e1b0c653f07
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.715-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-983--8000595249233899911, commit timestamp: Timestamp(1574796782, 4)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.731-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.689-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.716-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d5fb7059-a2a9-46fd-9f0a-23c9e00f6916: test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 ( 223b2910-58e8-47c8-bc49-888d49f5f240 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.731-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (526918bd-08ea-4b6f-9911-cef9acd4f152) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 4), t: 1 } and commit timestamp Timestamp(1574796782, 4)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.690-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:05.811-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796782, 1072), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3003ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.717-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d with provided UUID: d46b60d5-fac6-4129-b98b-ca820264d17d and options: { uuid: UUID("d46b60d5-fac6-4129-b98b-ca820264d17d"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.731-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (526918bd-08ea-4b6f-9911-cef9acd4f152).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.690-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (526918bd-08ea-4b6f-9911-cef9acd4f152) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 4), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.731-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.731-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 6b287a9e-fb9c-4d0e-9c02-d148d34708d8 from test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.690-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (526918bd-08ea-4b6f-9911-cef9acd4f152).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.748-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.732-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (526918bd-08ea-4b6f-9911-cef9acd4f152)'. Ident: 'index-984--4104909142373009110', commit timestamp: 'Timestamp(1574796782, 4)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.690-0500 I STORAGE [conn112] renameCollection: renaming collection 6b287a9e-fb9c-4d0e-9c02-d148d34708d8 from test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.748-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.732-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (526918bd-08ea-4b6f-9911-cef9acd4f152)'. Ident: 'index-993--4104909142373009110', commit timestamp: 'Timestamp(1574796782, 4)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.690-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (526918bd-08ea-4b6f-9911-cef9acd4f152)'. Ident: 'index-976-8224331490264904478', commit timestamp: 'Timestamp(1574796782, 4)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.748-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 765a634c-8bc7-4525-80bf-1aac992a80f9: test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab (db8c1047-9657-4464-a6b8-50983e8706cb ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.732-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-983--4104909142373009110, commit timestamp: Timestamp(1574796782, 4)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.690-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (526918bd-08ea-4b6f-9911-cef9acd4f152)'. Ident: 'index-978-8224331490264904478', commit timestamp: 'Timestamp(1574796782, 4)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.749-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.733-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 7a02e97e-148a-43dd-be43-4765d5aac48f: test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 ( 223b2910-58e8-47c8-bc49-888d49f5f240 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.690-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-973-8224331490264904478, commit timestamp: Timestamp(1574796782, 4)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.749-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.734-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d with provided UUID: d46b60d5-fac6-4129-b98b-ca820264d17d and options: { uuid: UUID("d46b60d5-fac6-4129-b98b-ca820264d17d"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.690-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c appName: "tid:0" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.4f451dd7-584d-4674-82a6-c6e2764e8e9c", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "strict", validationAction: "warn" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796780, 1899), signature: { hash: BinData(0, F80EE476C9F33D18043F32BDD207B13DB1FF3284), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2586978 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2587ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.750-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 (e26c4867-07b9-43f2-97fb-c5bb7e770e88) to test5_fsmdb0.agg_out and drop 6b287a9e-fb9c-4d0e-9c02-d148d34708d8.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.748-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.690-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796780, 1399), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796780, 1527), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796780, 1399). Collection minimum timestamp is Timestamp(1574796782, 3)" errName:SnapshotUnavailable errCode:246 reslen:599 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2472730 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2472ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.752-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.768-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.691-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6427923959247498224, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3969136292877006311, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796779933), clusterTime: Timestamp(1574796779, 2522) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796779, 2523), signature: { hash: BinData(0, 62DF4306384833AB2A703DF64C47402873CAAC75), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2756ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.752-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (6b287a9e-fb9c-4d0e-9c02-d148d34708d8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 60), t: 1 } and commit timestamp Timestamp(1574796782, 60)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.768-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.691-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.752-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (6b287a9e-fb9c-4d0e-9c02-d148d34708d8).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.768-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: d5f83ba5-c041-4ecd-a288-2f6098a93d15: test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab (db8c1047-9657-4464-a6b8-50983e8706cb ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.691-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d with generated UUID: d46b60d5-fac6-4129-b98b-ca820264d17d and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.752-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection e26c4867-07b9-43f2-97fb-c5bb7e770e88 from test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.768-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.691-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 appName: "tid:2" command: insert { insert: "tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952", bypassDocumentValidation: false, ordered: false, documents: 500, shardVersion: [ Timestamp(0, 0), ObjectId('000000000000000000000000') ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, writeConcern: { w: 1, wtimeout: 0 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796780, 577), signature: { hash: BinData(0, F80EE476C9F33D18043F32BDD207B13DB1FF3284), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } ninserted:500 keysInserted:1000 numYields:0 reslen:400 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 8 } }, ReplicationStateTransition: { acquireCount: { w: 8 } }, Global: { acquireCount: { w: 8 } }, Database: { acquireCount: { w: 8 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 2609145 } }, Collection: { acquireCount: { w: 8 } }, Mutex: { acquireCount: { r: 1016 } } } flowControl:{ acquireCount: 8 } storage:{ timeWaitingMicros: { schemaLock: 13667 } } protocol:op_msg 2634ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.753-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6b287a9e-fb9c-4d0e-9c02-d148d34708d8)'. Ident: 'index-988--8000595249233899911', commit timestamp: 'Timestamp(1574796782, 60)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.768-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.692-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 61795036-d24f-4c59-98d5-51e62b6d5b7f: test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 ( 223b2910-58e8-47c8-bc49-888d49f5f240 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.753-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6b287a9e-fb9c-4d0e-9c02-d148d34708d8)'. Ident: 'index-991--8000595249233899911', commit timestamp: 'Timestamp(1574796782, 60)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.769-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 (e26c4867-07b9-43f2-97fb-c5bb7e770e88) to test5_fsmdb0.agg_out and drop 6b287a9e-fb9c-4d0e-9c02-d148d34708d8.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.692-0500 I INDEX [conn108] Index build completed: 61795036-d24f-4c59-98d5-51e62b6d5b7f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.753-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-987--8000595249233899911, commit timestamp: Timestamp(1574796782, 60)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.771-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.692-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796780, 1217), signature: { hash: BinData(0, F80EE476C9F33D18043F32BDD207B13DB1FF3284), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2620ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.753-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 with provided UUID: b360f29d-1a5c-4704-8e76-c6e9c8cfc951 and options: { uuid: UUID("b360f29d-1a5c-4704-8e76-c6e9c8cfc951"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.771-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (6b287a9e-fb9c-4d0e-9c02-d148d34708d8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 60), t: 1 } and commit timestamp Timestamp(1574796782, 60)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.693-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.755-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 765a634c-8bc7-4525-80bf-1aac992a80f9: test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab ( db8c1047-9657-4464-a6b8-50983e8706cb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.771-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (6b287a9e-fb9c-4d0e-9c02-d148d34708d8).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.701-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.770-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.771-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection e26c4867-07b9-43f2-97fb-c5bb7e770e88 from test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.708-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.771-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 with provided UUID: ccce1418-7a8a-42e8-b605-d02af92c7466 and options: { uuid: UUID("ccce1418-7a8a-42e8-b605-d02af92c7466"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.771-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6b287a9e-fb9c-4d0e-9c02-d148d34708d8)'. Ident: 'index-988--4104909142373009110', commit timestamp: 'Timestamp(1574796782, 60)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.708-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.788-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.771-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6b287a9e-fb9c-4d0e-9c02-d148d34708d8)'. Ident: 'index-991--4104909142373009110', commit timestamp: 'Timestamp(1574796782, 60)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.709-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (6b287a9e-fb9c-4d0e-9c02-d148d34708d8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 60), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.815-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.771-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-987--4104909142373009110, commit timestamp: Timestamp(1574796782, 60)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.709-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (6b287a9e-fb9c-4d0e-9c02-d148d34708d8).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.815-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.772-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 with provided UUID: b360f29d-1a5c-4704-8e76-c6e9c8cfc951 and options: { uuid: UUID("b360f29d-1a5c-4704-8e76-c6e9c8cfc951"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.709-0500 I STORAGE [conn114] renameCollection: renaming collection e26c4867-07b9-43f2-97fb-c5bb7e770e88 from test5_fsmdb0.tmp.agg_out.baa41abb-a843-4493-8cb9-3c17d2123952 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.815-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 16f9f565-5f3e-4442-ae5e-4160c87dc458: test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d (d46b60d5-fac6-4129-b98b-ca820264d17d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.774-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: d5f83ba5-c041-4ecd-a288-2f6098a93d15: test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab ( db8c1047-9657-4464-a6b8-50983e8706cb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.709-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6b287a9e-fb9c-4d0e-9c02-d148d34708d8)'. Ident: 'index-977-8224331490264904478', commit timestamp: 'Timestamp(1574796782, 60)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.815-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.789-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.709-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6b287a9e-fb9c-4d0e-9c02-d148d34708d8)'. Ident: 'index-981-8224331490264904478', commit timestamp: 'Timestamp(1574796782, 60)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.815-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.790-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 with provided UUID: ccce1418-7a8a-42e8-b605-d02af92c7466 and options: { uuid: UUID("ccce1418-7a8a-42e8-b605-d02af92c7466"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.709-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-975-8224331490264904478, commit timestamp: Timestamp(1574796782, 60)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.816-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 (223b2910-58e8-47c8-bc49-888d49f5f240) to test5_fsmdb0.agg_out and drop e26c4867-07b9-43f2-97fb-c5bb7e770e88.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.806-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.709-0500 I INDEX [conn112] Registering index build: 6d73b418-1b33-44f2-a041-4ad1d0afa2c3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.818-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.832-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.709-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8535670507792140461, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3840456444008427248, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796779914), clusterTime: Timestamp(1574796779, 2518) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796779, 2522), signature: { hash: BinData(0, 62DF4306384833AB2A703DF64C47402873CAAC75), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2775ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.819-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (e26c4867-07b9-43f2-97fb-c5bb7e770e88) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 631), t: 1 } and commit timestamp Timestamp(1574796782, 631)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.832-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.709-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: f84aabd1-daf5-4896-91da-0e1b0c653f07: test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab ( db8c1047-9657-4464-a6b8-50983e8706cb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.819-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (e26c4867-07b9-43f2-97fb-c5bb7e770e88).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.832-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 778c8353-b8b8-4dd0-97cf-43c8e27a06b5: test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d (d46b60d5-fac6-4129-b98b-ca820264d17d ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.710-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 with generated UUID: b360f29d-1a5c-4704-8e76-c6e9c8cfc951 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.819-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 223b2910-58e8-47c8-bc49-888d49f5f240 from test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.832-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.714-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 with generated UUID: ccce1418-7a8a-42e8-b605-d02af92c7466 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.819-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e26c4867-07b9-43f2-97fb-c5bb7e770e88)'. Ident: 'index-986--8000595249233899911', commit timestamp: 'Timestamp(1574796782, 631)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.832-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.736-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.819-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e26c4867-07b9-43f2-97fb-c5bb7e770e88)'. Ident: 'index-995--8000595249233899911', commit timestamp: 'Timestamp(1574796782, 631)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.833-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 (223b2910-58e8-47c8-bc49-888d49f5f240) to test5_fsmdb0.agg_out and drop e26c4867-07b9-43f2-97fb-c5bb7e770e88.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.736-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.819-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-985--8000595249233899911, commit timestamp: Timestamp(1574796782, 631)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.835-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.736-0500 I STORAGE [conn112] Index build initialized: 6d73b418-1b33-44f2-a041-4ad1d0afa2c3: test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d (d46b60d5-fac6-4129-b98b-ca820264d17d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.821-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 16f9f565-5f3e-4442-ae5e-4160c87dc458: test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d ( d46b60d5-fac6-4129-b98b-ca820264d17d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.835-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (e26c4867-07b9-43f2-97fb-c5bb7e770e88) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 631), t: 1 } and commit timestamp Timestamp(1574796782, 631)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.736-0500 I INDEX [conn112] Waiting for index build to complete: 6d73b418-1b33-44f2-a041-4ad1d0afa2c3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.822-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 with provided UUID: 861f0ca2-94ac-4213-a598-19555294a03c and options: { uuid: UUID("861f0ca2-94ac-4213-a598-19555294a03c"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.835-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (e26c4867-07b9-43f2-97fb-c5bb7e770e88).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.736-0500 I INDEX [conn46] Index build completed: f84aabd1-daf5-4896-91da-0e1b0c653f07
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.845-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.835-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection 223b2910-58e8-47c8-bc49-888d49f5f240 from test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.736-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796780, 1397), signature: { hash: BinData(0, F80EE476C9F33D18043F32BDD207B13DB1FF3284), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 15598 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2655ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.864-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.835-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e26c4867-07b9-43f2-97fb-c5bb7e770e88)'. Ident: 'index-986--4104909142373009110', commit timestamp: 'Timestamp(1574796782, 631)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.736-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.864-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.835-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e26c4867-07b9-43f2-97fb-c5bb7e770e88)'. Ident: 'index-995--4104909142373009110', commit timestamp: 'Timestamp(1574796782, 631)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.744-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.864-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: f1668b20-092d-4145-b450-91ba4792ce71: test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 (b360f29d-1a5c-4704-8e76-c6e9c8cfc951 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.835-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-985--4104909142373009110, commit timestamp: Timestamp(1574796782, 631)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.744-0500 I INDEX [conn114] Registering index build: ca86c59e-d028-4b13-b91d-2c0ae18b2675
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.865-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.837-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 778c8353-b8b8-4dd0-97cf-43c8e27a06b5: test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d ( d46b60d5-fac6-4129-b98b-ca820264d17d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.750-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.867-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.846-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 with provided UUID: 861f0ca2-94ac-4213-a598-19555294a03c and options: { uuid: UUID("861f0ca2-94ac-4213-a598-19555294a03c"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.751-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.867-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab (db8c1047-9657-4464-a6b8-50983e8706cb) to test5_fsmdb0.agg_out and drop 223b2910-58e8-47c8-bc49-888d49f5f240.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.861-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.767-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.870-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.881-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.767-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.870-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (223b2910-58e8-47c8-bc49-888d49f5f240) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 1072), t: 1 } and commit timestamp Timestamp(1574796782, 1072)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.881-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.767-0500 I STORAGE [conn114] Index build initialized: ca86c59e-d028-4b13-b91d-2c0ae18b2675: test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 (b360f29d-1a5c-4704-8e76-c6e9c8cfc951 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.870-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (223b2910-58e8-47c8-bc49-888d49f5f240).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.881-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: bf17d0b8-35d1-4181-88d9-ab3a0484bfbf: test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 (b360f29d-1a5c-4704-8e76-c6e9c8cfc951 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.767-0500 I INDEX [conn114] Waiting for index build to complete: ca86c59e-d028-4b13-b91d-2c0ae18b2675
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.870-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection db8c1047-9657-4464-a6b8-50983e8706cb from test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.881-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.770-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.870-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (223b2910-58e8-47c8-bc49-888d49f5f240)'. Ident: 'index-998--8000595249233899911', commit timestamp: 'Timestamp(1574796782, 1072)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.882-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.770-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.870-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (223b2910-58e8-47c8-bc49-888d49f5f240)'. Ident: 'index-1001--8000595249233899911', commit timestamp: 'Timestamp(1574796782, 1072)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.883-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab (db8c1047-9657-4464-a6b8-50983e8706cb) to test5_fsmdb0.agg_out and drop 223b2910-58e8-47c8-bc49-888d49f5f240.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.770-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (e26c4867-07b9-43f2-97fb-c5bb7e770e88) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 631), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.870-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-997--8000595249233899911, commit timestamp: Timestamp(1574796782, 1072)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.885-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.770-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (e26c4867-07b9-43f2-97fb-c5bb7e770e88).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.871-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 with provided UUID: 2e48d489-1045-4ffa-84bc-f62bb8a23159 and options: { uuid: UUID("2e48d489-1045-4ffa-84bc-f62bb8a23159"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.885-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (223b2910-58e8-47c8-bc49-888d49f5f240) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 1072), t: 1 } and commit timestamp Timestamp(1574796782, 1072)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.770-0500 I STORAGE [conn108] renameCollection: renaming collection 223b2910-58e8-47c8-bc49-888d49f5f240 from test5_fsmdb0.tmp.agg_out.1edba0ef-162b-426a-97b8-14c3f416db28 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.872-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: f1668b20-092d-4145-b450-91ba4792ce71: test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 ( b360f29d-1a5c-4704-8e76-c6e9c8cfc951 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.885-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (223b2910-58e8-47c8-bc49-888d49f5f240).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.770-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e26c4867-07b9-43f2-97fb-c5bb7e770e88)'. Ident: 'index-979-8224331490264904478', commit timestamp: 'Timestamp(1574796782, 631)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.887-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.885-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection db8c1047-9657-4464-a6b8-50983e8706cb from test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.770-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e26c4867-07b9-43f2-97fb-c5bb7e770e88)'. Ident: 'index-983-8224331490264904478', commit timestamp: 'Timestamp(1574796782, 631)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.908-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.885-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (223b2910-58e8-47c8-bc49-888d49f5f240)'. Ident: 'index-998--4104909142373009110', commit timestamp: 'Timestamp(1574796782, 1072)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.770-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-974-8224331490264904478, commit timestamp: Timestamp(1574796782, 631)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.908-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.885-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (223b2910-58e8-47c8-bc49-888d49f5f240)'. Ident: 'index-1001--4104909142373009110', commit timestamp: 'Timestamp(1574796782, 1072)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.770-0500 I INDEX [conn110] Registering index build: 4d32fc79-8ff9-4c34-85ec-734dc2c81585
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.908-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: c95aa036-98db-4f37-bdf4-803e6e137d4d: test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 (ccce1418-7a8a-42e8-b605-d02af92c7466 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.885-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-997--4104909142373009110, commit timestamp: Timestamp(1574796782, 1072)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.770-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.908-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.887-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: bf17d0b8-35d1-4181-88d9-ab3a0484bfbf: test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 ( b360f29d-1a5c-4704-8e76-c6e9c8cfc951 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.771-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1450916505166224626, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8021829760420063254, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796780041), clusterTime: Timestamp(1574796780, 445) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796780, 445), signature: { hash: BinData(0, F80EE476C9F33D18043F32BDD207B13DB1FF3284), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2727ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.909-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.888-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 with provided UUID: 2e48d489-1045-4ffa-84bc-f62bb8a23159 and options: { uuid: UUID("2e48d489-1045-4ffa-84bc-f62bb8a23159"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.771-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 6d73b418-1b33-44f2-a041-4ad1d0afa2c3: test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d ( d46b60d5-fac6-4129-b98b-ca820264d17d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.912-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.903-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.772-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.914-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: c95aa036-98db-4f37-bdf4-803e6e137d4d: test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 ( ccce1418-7a8a-42e8-b605-d02af92c7466 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.923-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.774-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 with generated UUID: 861f0ca2-94ac-4213-a598-19555294a03c and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.935-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.923-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.782-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.935-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.923-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 921d1346-be57-4bdf-b42e-340089383d64: test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 (ccce1418-7a8a-42e8-b605-d02af92c7466 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.797-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.935-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 1e46457f-a1d1-469e-a584-ff412b44c36e: test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 (861f0ca2-94ac-4213-a598-19555294a03c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.923-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.797-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.935-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.924-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.797-0500 I STORAGE [conn110] Index build initialized: 4d32fc79-8ff9-4c34-85ec-734dc2c81585: test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 (ccce1418-7a8a-42e8-b605-d02af92c7466 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.936-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d (d46b60d5-fac6-4129-b98b-ca820264d17d) to test5_fsmdb0.agg_out and drop db8c1047-9657-4464-a6b8-50983e8706cb.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.926-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.797-0500 I INDEX [conn110] Waiting for index build to complete: 4d32fc79-8ff9-4c34-85ec-734dc2c81585
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.937-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.932-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 921d1346-be57-4bdf-b42e-340089383d64: test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 ( ccce1418-7a8a-42e8-b605-d02af92c7466 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.797-0500 I INDEX [conn112] Index build completed: 6d73b418-1b33-44f2-a041-4ad1d0afa2c3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.939-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.958-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.797-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ca86c59e-d028-4b13-b91d-2c0ae18b2675: test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 ( b360f29d-1a5c-4704-8e76-c6e9c8cfc951 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.939-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (db8c1047-9657-4464-a6b8-50983e8706cb) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 2079), t: 1 } and commit timestamp Timestamp(1574796782, 2079)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.958-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.797-0500 I INDEX [conn114] Index build completed: ca86c59e-d028-4b13-b91d-2c0ae18b2675
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.940-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (db8c1047-9657-4464-a6b8-50983e8706cb).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.958-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: a0dd086f-20a0-44b6-a49e-836f8b0fa5e5: test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 (861f0ca2-94ac-4213-a598-19555294a03c ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.805-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.940-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection d46b60d5-fac6-4129-b98b-ca820264d17d from test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.958-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.805-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.940-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (db8c1047-9657-4464-a6b8-50983e8706cb)'. Ident: 'index-1000--8000595249233899911', commit timestamp: 'Timestamp(1574796782, 2079)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.959-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.805-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (223b2910-58e8-47c8-bc49-888d49f5f240) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 1072), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.940-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (db8c1047-9657-4464-a6b8-50983e8706cb)'. Ident: 'index-1005--8000595249233899911', commit timestamp: 'Timestamp(1574796782, 2079)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.959-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d (d46b60d5-fac6-4129-b98b-ca820264d17d) to test5_fsmdb0.agg_out and drop db8c1047-9657-4464-a6b8-50983e8706cb.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.805-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (223b2910-58e8-47c8-bc49-888d49f5f240).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.940-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-999--8000595249233899911, commit timestamp: Timestamp(1574796782, 2079)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.962-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.805-0500 I STORAGE [conn46] renameCollection: renaming collection db8c1047-9657-4464-a6b8-50983e8706cb from test5_fsmdb0.tmp.agg_out.c2893482-338f-49a6-a0a1-e45ddca808ab to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.940-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 (b360f29d-1a5c-4704-8e76-c6e9c8cfc951) to test5_fsmdb0.agg_out and drop d46b60d5-fac6-4129-b98b-ca820264d17d.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.962-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (db8c1047-9657-4464-a6b8-50983e8706cb) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 2079), t: 1 } and commit timestamp Timestamp(1574796782, 2079)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.805-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (223b2910-58e8-47c8-bc49-888d49f5f240)'. Ident: 'index-987-8224331490264904478', commit timestamp: 'Timestamp(1574796782, 1072)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.940-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 1e46457f-a1d1-469e-a584-ff412b44c36e: test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 ( 861f0ca2-94ac-4213-a598-19555294a03c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.962-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (db8c1047-9657-4464-a6b8-50983e8706cb).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.805-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (223b2910-58e8-47c8-bc49-888d49f5f240)'. Ident: 'index-989-8224331490264904478', commit timestamp: 'Timestamp(1574796782, 1072)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.941-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (d46b60d5-fac6-4129-b98b-ca820264d17d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 2080), t: 1 } and commit timestamp Timestamp(1574796782, 2080)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.962-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection d46b60d5-fac6-4129-b98b-ca820264d17d from test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.805-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-985-8224331490264904478, commit timestamp: Timestamp(1574796782, 1072)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.941-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (d46b60d5-fac6-4129-b98b-ca820264d17d).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.962-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (db8c1047-9657-4464-a6b8-50983e8706cb)'. Ident: 'index-1000--4104909142373009110', commit timestamp: 'Timestamp(1574796782, 2079)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.805-0500 I INDEX [conn108] Registering index build: 33d1c2bb-e0e2-429f-9ef7-f8afe899a4d5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.941-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection b360f29d-1a5c-4704-8e76-c6e9c8cfc951 from test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.962-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (db8c1047-9657-4464-a6b8-50983e8706cb)'. Ident: 'index-1005--4104909142373009110', commit timestamp: 'Timestamp(1574796782, 2079)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.805-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.941-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d46b60d5-fac6-4129-b98b-ca820264d17d)'. Ident: 'index-1004--8000595249233899911', commit timestamp: 'Timestamp(1574796782, 2080)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.962-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-999--4104909142373009110, commit timestamp: Timestamp(1574796782, 2079)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.806-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5314899426142570755, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 177708184822372260, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796780040), clusterTime: Timestamp(1574796780, 444) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796780, 445), signature: { hash: BinData(0, F80EE476C9F33D18043F32BDD207B13DB1FF3284), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2762ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.941-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d46b60d5-fac6-4129-b98b-ca820264d17d)'. Ident: 'index-1011--8000595249233899911', commit timestamp: 'Timestamp(1574796782, 2080)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.963-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 (b360f29d-1a5c-4704-8e76-c6e9c8cfc951) to test5_fsmdb0.agg_out and drop d46b60d5-fac6-4129-b98b-ca820264d17d.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.806-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.941-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1003--8000595249233899911, commit timestamp: Timestamp(1574796782, 2080)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.963-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (d46b60d5-fac6-4129-b98b-ca820264d17d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 2080), t: 1 } and commit timestamp Timestamp(1574796782, 2080)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.809-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 with generated UUID: 2e48d489-1045-4ffa-84bc-f62bb8a23159 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.941-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 with provided UUID: ec38d6fa-6e33-4f60-b07d-5a539df38f7d and options: { uuid: UUID("ec38d6fa-6e33-4f60-b07d-5a539df38f7d"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.963-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (d46b60d5-fac6-4129-b98b-ca820264d17d).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.816-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.957-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.963-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection b360f29d-1a5c-4704-8e76-c6e9c8cfc951 from test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.834-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.960-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 with provided UUID: d0562d5d-96c9-47f7-a8fa-ee8472a65ebf and options: { uuid: UUID("d0562d5d-96c9-47f7-a8fa-ee8472a65ebf"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.964-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d46b60d5-fac6-4129-b98b-ca820264d17d)'. Ident: 'index-1004--4104909142373009110', commit timestamp: 'Timestamp(1574796782, 2080)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.834-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.975-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.964-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d46b60d5-fac6-4129-b98b-ca820264d17d)'. Ident: 'index-1011--4104909142373009110', commit timestamp: 'Timestamp(1574796782, 2080)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.834-0500 I STORAGE [conn108] Index build initialized: 33d1c2bb-e0e2-429f-9ef7-f8afe899a4d5: test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 (861f0ca2-94ac-4213-a598-19555294a03c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.993-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.964-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1003--4104909142373009110, commit timestamp: Timestamp(1574796782, 2080)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.834-0500 I INDEX [conn108] Waiting for index build to complete: 33d1c2bb-e0e2-429f-9ef7-f8afe899a4d5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.993-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.964-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 with provided UUID: ec38d6fa-6e33-4f60-b07d-5a539df38f7d and options: { uuid: UUID("ec38d6fa-6e33-4f60-b07d-5a539df38f7d"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.834-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.993-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 6b6301b3-3ef7-453e-9b65-8c3244847717: test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 (2e48d489-1045-4ffa-84bc-f62bb8a23159 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.965-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: a0dd086f-20a0-44b6-a49e-836f8b0fa5e5: test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 ( 861f0ca2-94ac-4213-a598-19555294a03c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.836-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 4d32fc79-8ff9-4c34-85ec-734dc2c81585: test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 ( ccce1418-7a8a-42e8-b605-d02af92c7466 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.993-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.980-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.836-0500 I INDEX [conn110] Index build completed: 4d32fc79-8ff9-4c34-85ec-734dc2c81585
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.993-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.983-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 with provided UUID: d0562d5d-96c9-47f7-a8fa-ee8472a65ebf and options: { uuid: UUID("d0562d5d-96c9-47f7-a8fa-ee8472a65ebf"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.845-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.994-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 (ccce1418-7a8a-42e8-b605-d02af92c7466) to test5_fsmdb0.agg_out and drop b360f29d-1a5c-4704-8e76-c6e9c8cfc951.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:02.998-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.845-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.996-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.027-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.847-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.996-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (b360f29d-1a5c-4704-8e76-c6e9c8cfc951) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 2586), t: 1 } and commit timestamp Timestamp(1574796782, 2586)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.027-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.848-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.996-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (b360f29d-1a5c-4704-8e76-c6e9c8cfc951).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.027-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 75846729-07b7-4bfa-bfbd-039f2b3aba4f: test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 (2e48d489-1045-4ffa-84bc-f62bb8a23159 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.848-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (db8c1047-9657-4464-a6b8-50983e8706cb) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 2079), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.996-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection ccce1418-7a8a-42e8-b605-d02af92c7466 from test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.028-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.848-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (db8c1047-9657-4464-a6b8-50983e8706cb).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.996-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (b360f29d-1a5c-4704-8e76-c6e9c8cfc951)'. Ident: 'index-1008--8000595249233899911', commit timestamp: 'Timestamp(1574796782, 2586)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.028-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.848-0500 I STORAGE [conn114] renameCollection: renaming collection d46b60d5-fac6-4129-b98b-ca820264d17d from test5_fsmdb0.tmp.agg_out.ce8bceeb-a629-4db7-a083-4852f45e0a7d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.996-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (b360f29d-1a5c-4704-8e76-c6e9c8cfc951)'. Ident: 'index-1015--8000595249233899911', commit timestamp: 'Timestamp(1574796782, 2586)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.030-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.848-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (db8c1047-9657-4464-a6b8-50983e8706cb)'. Ident: 'index-988-8224331490264904478', commit timestamp: 'Timestamp(1574796782, 2079)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.996-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1007--8000595249233899911, commit timestamp: Timestamp(1574796782, 2586)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.032-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 75846729-07b7-4bfa-bfbd-039f2b3aba4f: test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 ( 2e48d489-1045-4ffa-84bc-f62bb8a23159 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.848-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (db8c1047-9657-4464-a6b8-50983e8706cb)'. Ident: 'index-991-8224331490264904478', commit timestamp: 'Timestamp(1574796782, 2079)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.997-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 6b6301b3-3ef7-453e-9b65-8c3244847717: test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 ( 2e48d489-1045-4ffa-84bc-f62bb8a23159 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.032-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 (ccce1418-7a8a-42e8-b605-d02af92c7466) to test5_fsmdb0.agg_out and drop b360f29d-1a5c-4704-8e76-c6e9c8cfc951.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.848-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-986-8224331490264904478, commit timestamp: Timestamp(1574796782, 2079)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:02.997-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 with provided UUID: c9f16cbb-93d4-4708-a595-3d1f352c459d and options: { uuid: UUID("c9f16cbb-93d4-4708-a595-3d1f352c459d"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.033-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (b360f29d-1a5c-4704-8e76-c6e9c8cfc951) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 2586), t: 1 } and commit timestamp Timestamp(1574796782, 2586)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.848-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2381161881669618927, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5904792276736701989, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796782688), clusterTime: Timestamp(1574796780, 1527) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796782, 56), signature: { hash: BinData(0, 46155B04D5E3ACE833C17C9FD3FBF7B8A8E34E67), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 157ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:03.020-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.033-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (b360f29d-1a5c-4704-8e76-c6e9c8cfc951).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.848-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:03.027-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 (861f0ca2-94ac-4213-a598-19555294a03c) to test5_fsmdb0.agg_out and drop ccce1418-7a8a-42e8-b605-d02af92c7466.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.033-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection ccce1418-7a8a-42e8-b605-d02af92c7466 from test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.849-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (d46b60d5-fac6-4129-b98b-ca820264d17d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 2080), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:03.027-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (ccce1418-7a8a-42e8-b605-d02af92c7466) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 3090), t: 1 } and commit timestamp Timestamp(1574796782, 3090)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.033-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (b360f29d-1a5c-4704-8e76-c6e9c8cfc951)'. Ident: 'index-1008--4104909142373009110', commit timestamp: 'Timestamp(1574796782, 2586)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.849-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (d46b60d5-fac6-4129-b98b-ca820264d17d).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:03.027-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (ccce1418-7a8a-42e8-b605-d02af92c7466).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.033-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (b360f29d-1a5c-4704-8e76-c6e9c8cfc951)'. Ident: 'index-1015--4104909142373009110', commit timestamp: 'Timestamp(1574796782, 2586)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.849-0500 I STORAGE [conn112] renameCollection: renaming collection b360f29d-1a5c-4704-8e76-c6e9c8cfc951 from test5_fsmdb0.tmp.agg_out.015a6c57-b7c1-4890-994d-f8e5c3c65476 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:03.027-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 861f0ca2-94ac-4213-a598-19555294a03c from test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.033-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1007--4104909142373009110, commit timestamp: Timestamp(1574796782, 2586)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.849-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d46b60d5-fac6-4129-b98b-ca820264d17d)'. Ident: 'index-994-8224331490264904478', commit timestamp: 'Timestamp(1574796782, 2080)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:03.027-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ccce1418-7a8a-42e8-b605-d02af92c7466)'. Ident: 'index-1010--8000595249233899911', commit timestamp: 'Timestamp(1574796782, 3090)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.036-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 with provided UUID: c9f16cbb-93d4-4708-a595-3d1f352c459d and options: { uuid: UUID("c9f16cbb-93d4-4708-a595-3d1f352c459d"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.849-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d46b60d5-fac6-4129-b98b-ca820264d17d)'. Ident: 'index-995-8224331490264904478', commit timestamp: 'Timestamp(1574796782, 2080)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:03.027-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ccce1418-7a8a-42e8-b605-d02af92c7466)'. Ident: 'index-1019--8000595249233899911', commit timestamp: 'Timestamp(1574796782, 3090)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.048-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.849-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-993-8224331490264904478, commit timestamp: Timestamp(1574796782, 2080)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:03.027-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1009--8000595249233899911, commit timestamp: Timestamp(1574796782, 3090)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.059-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 (861f0ca2-94ac-4213-a598-19555294a03c) to test5_fsmdb0.agg_out and drop ccce1418-7a8a-42e8-b605-d02af92c7466.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.849-0500 I INDEX [conn46] Registering index build: e6276531-15ae-4f99-9870-cf3359741372
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.818-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 with provided UUID: e98c731b-a779-4d7e-8c43-71dbc93801e6 and options: { uuid: UUID("e98c731b-a779-4d7e-8c43-71dbc93801e6"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.059-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (ccce1418-7a8a-42e8-b605-d02af92c7466) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 3090), t: 1 } and commit timestamp Timestamp(1574796782, 3090)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.849-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8693362478639751652, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3066658468777643125, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796782692), clusterTime: Timestamp(1574796782, 56) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796782, 60), signature: { hash: BinData(0, 46155B04D5E3ACE833C17C9FD3FBF7B8A8E34E67), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 139ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.832-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.059-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (ccce1418-7a8a-42e8-b605-d02af92c7466).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.850-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 33d1c2bb-e0e2-429f-9ef7-f8afe899a4d5: test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 ( 861f0ca2-94ac-4213-a598-19555294a03c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.059-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 861f0ca2-94ac-4213-a598-19555294a03c from test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.851-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 with generated UUID: ec38d6fa-6e33-4f60-b07d-5a539df38f7d and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.059-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ccce1418-7a8a-42e8-b605-d02af92c7466)'. Ident: 'index-1010--4104909142373009110', commit timestamp: 'Timestamp(1574796782, 3090)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.852-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 with generated UUID: d0562d5d-96c9-47f7-a8fa-ee8472a65ebf and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.059-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ccce1418-7a8a-42e8-b605-d02af92c7466)'. Ident: 'index-1019--4104909142373009110', commit timestamp: 'Timestamp(1574796782, 3090)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.881-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.059-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1009--4104909142373009110, commit timestamp: Timestamp(1574796782, 3090)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.881-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:03.073-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796782, 3090) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796782, 3154), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 7068 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 108ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.881-0500 I STORAGE [conn46] Index build initialized: e6276531-15ae-4f99-9870-cf3359741372: test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 (2e48d489-1045-4ffa-84bc-f62bb8a23159 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.881-0500 I INDEX [conn46] Waiting for index build to complete: e6276531-15ae-4f99-9870-cf3359741372
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.881-0500 I INDEX [conn108] Index build completed: 33d1c2bb-e0e2-429f-9ef7-f8afe899a4d5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.881-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.889-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.897-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.897-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.899-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.899-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.899-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (b360f29d-1a5c-4704-8e76-c6e9c8cfc951) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 2586), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.899-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (b360f29d-1a5c-4704-8e76-c6e9c8cfc951).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.899-0500 I STORAGE [conn110] renameCollection: renaming collection ccce1418-7a8a-42e8-b605-d02af92c7466 from test5_fsmdb0.tmp.agg_out.b5ad2e22-75fb-4961-a415-2c2dc0bb29b0 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.900-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (b360f29d-1a5c-4704-8e76-c6e9c8cfc951)'. Ident: 'index-999-8224331490264904478', commit timestamp: 'Timestamp(1574796782, 2586)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.900-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (b360f29d-1a5c-4704-8e76-c6e9c8cfc951)'. Ident: 'index-1001-8224331490264904478', commit timestamp: 'Timestamp(1574796782, 2586)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.900-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-996-8224331490264904478, commit timestamp: Timestamp(1574796782, 2586)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.900-0500 I INDEX [conn114] Registering index build: 22b4fd1a-06d6-430c-905a-740f65baa797
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.900-0500 I INDEX [conn112] Registering index build: 93443b7c-98da-40d4-8e9d-d986e25cb1d9
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.833-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 with provided UUID: e98c731b-a779-4d7e-8c43-71dbc93801e6 and options: { uuid: UUID("e98c731b-a779-4d7e-8c43-71dbc93801e6"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.900-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1607479294991808244, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1844021779233186725, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796782710), clusterTime: Timestamp(1574796782, 60) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796782, 61), signature: { hash: BinData(0, 46155B04D5E3ACE833C17C9FD3FBF7B8A8E34E67), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 186ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.902-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: e6276531-15ae-4f99-9870-cf3359741372: test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 ( 2e48d489-1045-4ffa-84bc-f62bb8a23159 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.903-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 with generated UUID: c9f16cbb-93d4-4708-a595-3d1f352c459d and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.924-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.924-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.924-0500 I STORAGE [conn114] Index build initialized: 22b4fd1a-06d6-430c-905a-740f65baa797: test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.924-0500 I INDEX [conn114] Waiting for index build to complete: 22b4fd1a-06d6-430c-905a-740f65baa797
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.925-0500 I INDEX [conn46] Index build completed: e6276531-15ae-4f99-9870-cf3359741372
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.934-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.948-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.949-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.949-0500 I STORAGE [conn112] Index build initialized: 93443b7c-98da-40d4-8e9d-d986e25cb1d9: test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 (ec38d6fa-6e33-4f60-b07d-5a539df38f7d ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.949-0500 I INDEX [conn112] Waiting for index build to complete: 93443b7c-98da-40d4-8e9d-d986e25cb1d9
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.949-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.949-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (ccce1418-7a8a-42e8-b605-d02af92c7466) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796782, 3090), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.949-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (ccce1418-7a8a-42e8-b605-d02af92c7466).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.949-0500 I STORAGE [conn108] renameCollection: renaming collection 861f0ca2-94ac-4213-a598-19555294a03c from test5_fsmdb0.tmp.agg_out.2c3a8cd2-8f3b-4c71-8807-f18934d86404 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.949-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ccce1418-7a8a-42e8-b605-d02af92c7466)'. Ident: 'index-1000-8224331490264904478', commit timestamp: 'Timestamp(1574796782, 3090)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.949-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ccce1418-7a8a-42e8-b605-d02af92c7466)'. Ident: 'index-1003-8224331490264904478', commit timestamp: 'Timestamp(1574796782, 3090)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.949-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-997-8224331490264904478, commit timestamp: Timestamp(1574796782, 3090)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.949-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.949-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.949-0500 I INDEX [conn110] Registering index build: 3d5e9eb2-ab01-4be0-9b1c-6956bf3f1aab
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.949-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7255988771845583249, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5078554877342754582, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796782772), clusterTime: Timestamp(1574796782, 695) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796782, 759), signature: { hash: BinData(0, 46155B04D5E3ACE833C17C9FD3FBF7B8A8E34E67), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 176ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.950-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.951-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.952-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 with generated UUID: e98c731b-a779-4d7e-8c43-71dbc93801e6 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.960-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.963-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.979-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.979-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.979-0500 I STORAGE [conn110] Index build initialized: 3d5e9eb2-ab01-4be0-9b1c-6956bf3f1aab: test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 (c9f16cbb-93d4-4708-a595-3d1f352c459d ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.979-0500 I INDEX [conn110] Waiting for index build to complete: 3d5e9eb2-ab01-4be0-9b1c-6956bf3f1aab
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.981-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 22b4fd1a-06d6-430c-905a-740f65baa797: test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 ( d0562d5d-96c9-47f7-a8fa-ee8472a65ebf ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.981-0500 I INDEX [conn114] Index build completed: 22b4fd1a-06d6-430c-905a-740f65baa797
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.983-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 93443b7c-98da-40d4-8e9d-d986e25cb1d9: test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 ( ec38d6fa-6e33-4f60-b07d-5a539df38f7d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:02.990-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.809-0500 I INDEX [conn112] Index build completed: 93443b7c-98da-40d4-8e9d-d986e25cb1d9
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.809-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796782, 2583), signature: { hash: BinData(0, 46155B04D5E3ACE833C17C9FD3FBF7B8A8E34E67), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 2532 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2911ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.809-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796782, 2583), signature: { hash: BinData(0, 46155B04D5E3ACE833C17C9FD3FBF7B8A8E34E67), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 9568 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2919ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.809-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 appName: "tid:4" command: create { create: "tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831", temp: true, validationLevel: "strict", validationAction: "warn", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796782, 3154), signature: { hash: BinData(0, 46155B04D5E3ACE833C17C9FD3FBF7B8A8E34E67), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2857ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.810-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.810-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (861f0ca2-94ac-4213-a598-19555294a03c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 1), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.810-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (861f0ca2-94ac-4213-a598-19555294a03c).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.810-0500 I STORAGE [conn46] renameCollection: renaming collection 2e48d489-1045-4ffa-84bc-f62bb8a23159 from test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.810-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (861f0ca2-94ac-4213-a598-19555294a03c)'. Ident: 'index-1006-8224331490264904478', commit timestamp: 'Timestamp(1574796785, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.810-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (861f0ca2-94ac-4213-a598-19555294a03c)'. Ident: 'index-1007-8224331490264904478', commit timestamp: 'Timestamp(1574796785, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.810-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1004-8224331490264904478, commit timestamp: Timestamp(1574796785, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.810-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 appName: "tid:1" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "strict", validationAction: "warn" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796782, 3595), signature: { hash: BinData(0, 46155B04D5E3ACE833C17C9FD3FBF7B8A8E34E67), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2838319 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2839ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.810-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.810-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796782, 3090), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796782, 3154), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796782, 3090). Collection minimum timestamp is Timestamp(1574796785, 1)" errName:SnapshotUnavailable errCode:246 reslen:599 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2735610 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2735ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.811-0500 I INDEX [conn108] Registering index build: c1bec2e3-da55-43a5-bdf7-0f819222f984
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.811-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6796207168316096651, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8234975543873729931, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796782807), clusterTime: Timestamp(1574796782, 1072) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796782, 1072), signature: { hash: BinData(0, 46155B04D5E3ACE833C17C9FD3FBF7B8A8E34E67), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3002ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.811-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.814-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 with generated UUID: 9b9410b1-eb46-4ac9-980f-6791a5dcb237 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.816-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.828-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 3d5e9eb2-ab01-4be0-9b1c-6956bf3f1aab: test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 ( c9f16cbb-93d4-4708-a595-3d1f352c459d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.835-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.835-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.835-0500 I STORAGE [conn108] Index build initialized: c1bec2e3-da55-43a5-bdf7-0f819222f984: test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 (e98c731b-a779-4d7e-8c43-71dbc93801e6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.835-0500 I INDEX [conn108] Waiting for index build to complete: c1bec2e3-da55-43a5-bdf7-0f819222f984
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.835-0500 I INDEX [conn110] Index build completed: 3d5e9eb2-ab01-4be0-9b1c-6956bf3f1aab
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.835-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.835-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796782, 3088), signature: { hash: BinData(0, 46155B04D5E3ACE833C17C9FD3FBF7B8A8E34E67), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 15096 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2901ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.843-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.843-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.845-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.846-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.846-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (2e48d489-1045-4ffa-84bc-f62bb8a23159) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 636), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.846-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (2e48d489-1045-4ffa-84bc-f62bb8a23159).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.846-0500 I STORAGE [conn114] renameCollection: renaming collection d0562d5d-96c9-47f7-a8fa-ee8472a65ebf from test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.846-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (2e48d489-1045-4ffa-84bc-f62bb8a23159)'. Ident: 'index-1010-8224331490264904478', commit timestamp: 'Timestamp(1574796785, 636)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.846-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (2e48d489-1045-4ffa-84bc-f62bb8a23159)'. Ident: 'index-1011-8224331490264904478', commit timestamp: 'Timestamp(1574796785, 636)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.846-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1008-8224331490264904478, commit timestamp: Timestamp(1574796785, 636)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.846-0500 I INDEX [conn46] Registering index build: 78449ed4-41f6-4f29-8c91-3ff40806782e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.846-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7967856379188075356, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3320597275889576645, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796782850), clusterTime: Timestamp(1574796782, 2080) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796782, 2080), signature: { hash: BinData(0, 46155B04D5E3ACE833C17C9FD3FBF7B8A8E34E67), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2994ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:05.846-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796782, 2080), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2996ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.847-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.847-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.847-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: ea2b981f-2b75-4834-99e7-3d16876460e1: test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.847-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.848-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.848-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.849-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 with generated UUID: 11ae90f2-60e4-4ba4-9937-49910b6ea6ca and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.850-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.851-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: c1bec2e3-da55-43a5-bdf7-0f819222f984: test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 ( e98c731b-a779-4d7e-8c43-71dbc93801e6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.855-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: ea2b981f-2b75-4834-99e7-3d16876460e1: test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 ( d0562d5d-96c9-47f7-a8fa-ee8472a65ebf ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.871-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.871-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.871-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: ca8aebbe-e41b-46b2-a0bb-93322adf41a9: test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.871-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.872-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.872-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.872-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.872-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 24bf486b-3484-460f-a09b-49281374b997: test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 (ec38d6fa-6e33-4f60-b07d-5a539df38f7d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.872-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.873-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.874-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.874-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.874-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.874-0500 I STORAGE [conn46] Index build initialized: 78449ed4-41f6-4f29-8c91-3ff40806782e: test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 (9b9410b1-eb46-4ac9-980f-6791a5dcb237 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.874-0500 I INDEX [conn46] Waiting for index build to complete: 78449ed4-41f6-4f29-8c91-3ff40806782e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.874-0500 I INDEX [conn108] Index build completed: c1bec2e3-da55-43a5-bdf7-0f819222f984
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.876-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.877-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 24bf486b-3484-460f-a09b-49281374b997: test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 ( ec38d6fa-6e33-4f60-b07d-5a539df38f7d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.877-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ca8aebbe-e41b-46b2-a0bb-93322adf41a9: test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 ( d0562d5d-96c9-47f7-a8fa-ee8472a65ebf ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.879-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 (2e48d489-1045-4ffa-84bc-f62bb8a23159) to test5_fsmdb0.agg_out and drop 861f0ca2-94ac-4213-a598-19555294a03c.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.879-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (861f0ca2-94ac-4213-a598-19555294a03c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 1), t: 1 } and commit timestamp Timestamp(1574796785, 1)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.879-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (861f0ca2-94ac-4213-a598-19555294a03c).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.879-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 2e48d489-1045-4ffa-84bc-f62bb8a23159 from test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.879-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (861f0ca2-94ac-4213-a598-19555294a03c)'. Ident: 'index-1014--8000595249233899911', commit timestamp: 'Timestamp(1574796785, 1)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.879-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (861f0ca2-94ac-4213-a598-19555294a03c)'. Ident: 'index-1021--8000595249233899911', commit timestamp: 'Timestamp(1574796785, 1)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.879-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1013--8000595249233899911, commit timestamp: Timestamp(1574796785, 1)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.880-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 with provided UUID: 9b9410b1-eb46-4ac9-980f-6791a5dcb237 and options: { uuid: UUID("9b9410b1-eb46-4ac9-980f-6791a5dcb237"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.882-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.882-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.883-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 1459), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.883-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.883-0500 I STORAGE [conn112] renameCollection: renaming collection ec38d6fa-6e33-4f60-b07d-5a539df38f7d from test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.883-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf)'. Ident: 'index-1016-8224331490264904478', commit timestamp: 'Timestamp(1574796785, 1459)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.883-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf)'. Ident: 'index-1017-8224331490264904478', commit timestamp: 'Timestamp(1574796785, 1459)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.883-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1013-8224331490264904478, commit timestamp: Timestamp(1574796785, 1459)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.883-0500 I INDEX [conn114] Registering index build: 6786be41-57ba-4855-b179-7aacc114237d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.883-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.883-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4423231629523059460, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3908962547281232484, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796782850), clusterTime: Timestamp(1574796782, 2079) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796782, 2080), signature: { hash: BinData(0, 46155B04D5E3ACE833C17C9FD3FBF7B8A8E34E67), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3032ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:05.883-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796782, 2079), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3033ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.884-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.887-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 with generated UUID: c6cc280d-e61a-486c-829e-c2da1507d01b and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.890-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.890-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.890-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: c90dbcec-2881-4321-9251-ccc00bd05b15: test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 (ec38d6fa-6e33-4f60-b07d-5a539df38f7d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.891-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.891-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.894-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.896-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.897-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.897-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: c90dbcec-2881-4321-9251-ccc00bd05b15: test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 ( ec38d6fa-6e33-4f60-b07d-5a539df38f7d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.899-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 (2e48d489-1045-4ffa-84bc-f62bb8a23159) to test5_fsmdb0.agg_out and drop 861f0ca2-94ac-4213-a598-19555294a03c.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.899-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (861f0ca2-94ac-4213-a598-19555294a03c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 1), t: 1 } and commit timestamp Timestamp(1574796785, 1)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.899-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (861f0ca2-94ac-4213-a598-19555294a03c).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.899-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 2e48d489-1045-4ffa-84bc-f62bb8a23159 from test5_fsmdb0.tmp.agg_out.caccaeea-cc4d-4ac9-955a-ccf1d52bee73 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.899-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (861f0ca2-94ac-4213-a598-19555294a03c)'. Ident: 'index-1014--4104909142373009110', commit timestamp: 'Timestamp(1574796785, 1)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.899-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (861f0ca2-94ac-4213-a598-19555294a03c)'. Ident: 'index-1021--4104909142373009110', commit timestamp: 'Timestamp(1574796785, 1)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.899-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1013--4104909142373009110, commit timestamp: Timestamp(1574796785, 1)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.900-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 with provided UUID: 9b9410b1-eb46-4ac9-980f-6791a5dcb237 and options: { uuid: UUID("9b9410b1-eb46-4ac9-980f-6791a5dcb237"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.911-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.911-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.911-0500 I STORAGE [conn114] Index build initialized: 6786be41-57ba-4855-b179-7aacc114237d: test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 (11ae90f2-60e4-4ba4-9937-49910b6ea6ca ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.911-0500 I INDEX [conn114] Waiting for index build to complete: 6786be41-57ba-4855-b179-7aacc114237d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.912-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.912-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.912-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 3d5bf4ae-d629-4d4f-a75d-c2ac9bcd3920: test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 (c9f16cbb-93d4-4708-a595-3d1f352c459d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.912-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.913-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.913-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 78449ed4-41f6-4f29-8c91-3ff40806782e: test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 ( 9b9410b1-eb46-4ac9-980f-6791a5dcb237 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.913-0500 I INDEX [conn46] Index build completed: 78449ed4-41f6-4f29-8c91-3ff40806782e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.915-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.915-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.917-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3d5bf4ae-d629-4d4f-a75d-c2ac9bcd3920: test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 ( c9f16cbb-93d4-4708-a595-3d1f352c459d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.920-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.920-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.920-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (ec38d6fa-6e33-4f60-b07d-5a539df38f7d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 1516), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.920-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (ec38d6fa-6e33-4f60-b07d-5a539df38f7d).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.920-0500 I STORAGE [conn110] renameCollection: renaming collection c9f16cbb-93d4-4708-a595-3d1f352c459d from test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.920-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ec38d6fa-6e33-4f60-b07d-5a539df38f7d)'. Ident: 'index-1015-8224331490264904478', commit timestamp: 'Timestamp(1574796785, 1516)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.920-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ec38d6fa-6e33-4f60-b07d-5a539df38f7d)'. Ident: 'index-1021-8224331490264904478', commit timestamp: 'Timestamp(1574796785, 1516)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.920-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1012-8224331490264904478, commit timestamp: Timestamp(1574796785, 1516)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.921-0500 I INDEX [conn112] Registering index build: 847705bc-f387-4a0b-b786-86881d67565f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.921-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.921-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6125952846357393263, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4426806640000510717, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796782901), clusterTime: Timestamp(1574796782, 2586) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796782, 2586), signature: { hash: BinData(0, 46155B04D5E3ACE833C17C9FD3FBF7B8A8E34E67), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3018ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:05.921-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796782, 2586), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3019ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.921-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.924-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.924-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f with generated UUID: bf0549ab-8e22-4460-aac1-66a3f1facf75 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.931-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.931-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.931-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: a23facc7-ed5a-4267-acb9-31082f380ebb: test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 (c9f16cbb-93d4-4708-a595-3d1f352c459d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.931-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.931-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.933-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 6786be41-57ba-4855-b179-7aacc114237d: test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 ( 11ae90f2-60e4-4ba4-9937-49910b6ea6ca ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.934-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.937-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: a23facc7-ed5a-4267-acb9-31082f380ebb: test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 ( c9f16cbb-93d4-4708-a595-3d1f352c459d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.939-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.939-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.939-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 09aaf464-b3c7-43fb-bab7-9aaf88b2cc10: test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 (e98c731b-a779-4d7e-8c43-71dbc93801e6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.939-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.940-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.942-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.943-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf) to test5_fsmdb0.agg_out and drop 2e48d489-1045-4ffa-84bc-f62bb8a23159.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.944-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (2e48d489-1045-4ffa-84bc-f62bb8a23159) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 636), t: 1 } and commit timestamp Timestamp(1574796785, 636)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.944-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (2e48d489-1045-4ffa-84bc-f62bb8a23159).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.944-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection d0562d5d-96c9-47f7-a8fa-ee8472a65ebf from test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.944-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (2e48d489-1045-4ffa-84bc-f62bb8a23159)'. Ident: 'index-1018--8000595249233899911', commit timestamp: 'Timestamp(1574796785, 636)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.944-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (2e48d489-1045-4ffa-84bc-f62bb8a23159)'. Ident: 'index-1027--8000595249233899911', commit timestamp: 'Timestamp(1574796785, 636)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.944-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1017--8000595249233899911, commit timestamp: Timestamp(1574796785, 636)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.944-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 09aaf464-b3c7-43fb-bab7-9aaf88b2cc10: test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 ( e98c731b-a779-4d7e-8c43-71dbc93801e6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.947-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 with provided UUID: 11ae90f2-60e4-4ba4-9937-49910b6ea6ca and options: { uuid: UUID("11ae90f2-60e4-4ba4-9937-49910b6ea6ca"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.948-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.948-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.948-0500 I STORAGE [conn112] Index build initialized: 847705bc-f387-4a0b-b786-86881d67565f: test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 (c6cc280d-e61a-486c-829e-c2da1507d01b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.948-0500 I INDEX [conn112] Waiting for index build to complete: 847705bc-f387-4a0b-b786-86881d67565f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.948-0500 I INDEX [conn114] Index build completed: 6786be41-57ba-4855-b179-7aacc114237d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.957-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.957-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.957-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (c9f16cbb-93d4-4708-a595-3d1f352c459d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 2277), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.957-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (c9f16cbb-93d4-4708-a595-3d1f352c459d).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.957-0500 I STORAGE [conn108] renameCollection: renaming collection e98c731b-a779-4d7e-8c43-71dbc93801e6 from test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.957-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c9f16cbb-93d4-4708-a595-3d1f352c459d)'. Ident: 'index-1020-8224331490264904478', commit timestamp: 'Timestamp(1574796785, 2277)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.957-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c9f16cbb-93d4-4708-a595-3d1f352c459d)'. Ident: 'index-1023-8224331490264904478', commit timestamp: 'Timestamp(1574796785, 2277)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.957-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1018-8224331490264904478, commit timestamp: Timestamp(1574796785, 2277)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.958-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.958-0500 I INDEX [conn110] Registering index build: fc39f65e-2e7b-4726-aaf4-55f8d36586da
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.958-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3229915973219253740, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3058772245076064175, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796782951), clusterTime: Timestamp(1574796782, 3154) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796782, 3154), signature: { hash: BinData(0, 46155B04D5E3ACE833C17C9FD3FBF7B8A8E34E67), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 3006ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:05.958-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796782, 3154), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 3007ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.958-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.960-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.960-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.960-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: aba96f7b-59ac-458a-a9be-f12ec9b065c7: test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 (e98c731b-a779-4d7e-8c43-71dbc93801e6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.960-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.961-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.962-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf) to test5_fsmdb0.agg_out and drop 2e48d489-1045-4ffa-84bc-f62bb8a23159.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.962-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.963-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e with generated UUID: 1f7a950b-5169-4f9e-b79c-19f7533a1b9f and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.965-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.965-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (2e48d489-1045-4ffa-84bc-f62bb8a23159) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 636), t: 1 } and commit timestamp Timestamp(1574796785, 636)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.965-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (2e48d489-1045-4ffa-84bc-f62bb8a23159).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.965-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection d0562d5d-96c9-47f7-a8fa-ee8472a65ebf from test5_fsmdb0.tmp.agg_out.07ca76f6-4ed0-4898-9c8e-80b19175fbd3 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.965-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (2e48d489-1045-4ffa-84bc-f62bb8a23159)'. Ident: 'index-1018--4104909142373009110', commit timestamp: 'Timestamp(1574796785, 636)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.965-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (2e48d489-1045-4ffa-84bc-f62bb8a23159)'. Ident: 'index-1027--4104909142373009110', commit timestamp: 'Timestamp(1574796785, 636)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.965-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1017--4104909142373009110, commit timestamp: Timestamp(1574796785, 636)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.966-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: aba96f7b-59ac-458a-a9be-f12ec9b065c7: test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 ( e98c731b-a779-4d7e-8c43-71dbc93801e6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.968-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 (ec38d6fa-6e33-4f60-b07d-5a539df38f7d) to test5_fsmdb0.agg_out and drop d0562d5d-96c9-47f7-a8fa-ee8472a65ebf.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.968-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 1459), t: 1 } and commit timestamp Timestamp(1574796785, 1459)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.968-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.969-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection ec38d6fa-6e33-4f60-b07d-5a539df38f7d from test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.969-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf)'. Ident: 'index-1026--8000595249233899911', commit timestamp: 'Timestamp(1574796785, 1459)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.969-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf)'. Ident: 'index-1033--8000595249233899911', commit timestamp: 'Timestamp(1574796785, 1459)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.969-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1025--8000595249233899911, commit timestamp: Timestamp(1574796785, 1459)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.969-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 with provided UUID: 11ae90f2-60e4-4ba4-9937-49910b6ea6ca and options: { uuid: UUID("11ae90f2-60e4-4ba4-9937-49910b6ea6ca"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.970-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.971-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 with provided UUID: c6cc280d-e61a-486c-829e-c2da1507d01b and options: { uuid: UUID("c6cc280d-e61a-486c-829e-c2da1507d01b"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.985-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:05.987-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.987-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 847705bc-f387-4a0b-b786-86881d67565f: test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 ( c6cc280d-e61a-486c-829e-c2da1507d01b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.990-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 (ec38d6fa-6e33-4f60-b07d-5a539df38f7d) to test5_fsmdb0.agg_out and drop d0562d5d-96c9-47f7-a8fa-ee8472a65ebf.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.990-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 1459), t: 1 } and commit timestamp Timestamp(1574796785, 1459)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.990-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.990-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection ec38d6fa-6e33-4f60-b07d-5a539df38f7d from test5_fsmdb0.tmp.agg_out.0e062d00-8952-4cc4-a529-94e8f0d83774 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.990-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf)'. Ident: 'index-1026--4104909142373009110', commit timestamp: 'Timestamp(1574796785, 1459)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.990-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d0562d5d-96c9-47f7-a8fa-ee8472a65ebf)'. Ident: 'index-1033--4104909142373009110', commit timestamp: 'Timestamp(1574796785, 1459)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.990-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1025--4104909142373009110, commit timestamp: Timestamp(1574796785, 1459)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:05.992-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 with provided UUID: c6cc280d-e61a-486c-829e-c2da1507d01b and options: { uuid: UUID("c6cc280d-e61a-486c-829e-c2da1507d01b"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.994-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.996-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.996-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.996-0500 I STORAGE [conn110] Index build initialized: fc39f65e-2e7b-4726-aaf4-55f8d36586da: test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f (bf0549ab-8e22-4460-aac1-66a3f1facf75 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.996-0500 I INDEX [conn110] Waiting for index build to complete: fc39f65e-2e7b-4726-aaf4-55f8d36586da
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.996-0500 I INDEX [conn112] Index build completed: 847705bc-f387-4a0b-b786-86881d67565f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.996-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.996-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (e98c731b-a779-4d7e-8c43-71dbc93801e6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 2590), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.996-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (e98c731b-a779-4d7e-8c43-71dbc93801e6).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.996-0500 I STORAGE [conn46] renameCollection: renaming collection 9b9410b1-eb46-4ac9-980f-6791a5dcb237 from test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.997-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e98c731b-a779-4d7e-8c43-71dbc93801e6)'. Ident: 'index-1026-8224331490264904478', commit timestamp: 'Timestamp(1574796785, 2590)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.997-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e98c731b-a779-4d7e-8c43-71dbc93801e6)'. Ident: 'index-1027-8224331490264904478', commit timestamp: 'Timestamp(1574796785, 2590)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.997-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1024-8224331490264904478, commit timestamp: Timestamp(1574796785, 2590)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.997-0500 I INDEX [conn108] Registering index build: a81cd4d5-a57b-4445-8efb-965d8fc5c223
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.997-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.997-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1569473150507201651, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2857691470296419461, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796785812), clusterTime: Timestamp(1574796785, 1) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796785, 1), signature: { hash: BinData(0, 1CD6CC44447C325D03E90CA58A77FD3F916A311A), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796777, 886), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 183ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:05.997-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796785, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 184ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:05.997-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.000-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 with generated UUID: eb1f68ad-9646-4e0b-a7b3-59019165c923 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.003-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.003-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.003-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 50ff1a86-63f1-4c71-8b2d-1ce0a9cb4c84: test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 (9b9410b1-eb46-4ac9-980f-6791a5dcb237 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.004-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.005-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.006-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.007-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 (c9f16cbb-93d4-4708-a595-3d1f352c459d) to test5_fsmdb0.agg_out and drop ec38d6fa-6e33-4f60-b07d-5a539df38f7d.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.007-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.008-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (ec38d6fa-6e33-4f60-b07d-5a539df38f7d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 1516), t: 1 } and commit timestamp Timestamp(1574796785, 1516)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.008-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (ec38d6fa-6e33-4f60-b07d-5a539df38f7d).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.008-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection c9f16cbb-93d4-4708-a595-3d1f352c459d from test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.008-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ec38d6fa-6e33-4f60-b07d-5a539df38f7d)'. Ident: 'index-1024--8000595249233899911', commit timestamp: 'Timestamp(1574796785, 1516)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.008-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ec38d6fa-6e33-4f60-b07d-5a539df38f7d)'. Ident: 'index-1035--8000595249233899911', commit timestamp: 'Timestamp(1574796785, 1516)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.008-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1023--8000595249233899911, commit timestamp: Timestamp(1574796785, 1516)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.010-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.011-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 50ff1a86-63f1-4c71-8b2d-1ce0a9cb4c84: test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 ( 9b9410b1-eb46-4ac9-980f-6791a5dcb237 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.023-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.023-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.023-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 398d7f1c-3d79-4fe3-b905-3c8736aac76e: test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 (9b9410b1-eb46-4ac9-980f-6791a5dcb237 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.023-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.024-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.024-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.024-0500 I STORAGE [conn108] Index build initialized: a81cd4d5-a57b-4445-8efb-965d8fc5c223: test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e (1f7a950b-5169-4f9e-b79c-19f7533a1b9f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.024-0500 I INDEX [conn108] Waiting for index build to complete: a81cd4d5-a57b-4445-8efb-965d8fc5c223
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.025-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.025-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 (c9f16cbb-93d4-4708-a595-3d1f352c459d) to test5_fsmdb0.agg_out and drop ec38d6fa-6e33-4f60-b07d-5a539df38f7d.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.026-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: fc39f65e-2e7b-4726-aaf4-55f8d36586da: test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f ( bf0549ab-8e22-4460-aac1-66a3f1facf75 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.026-0500 I INDEX [conn110] Index build completed: fc39f65e-2e7b-4726-aaf4-55f8d36586da
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.026-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.026-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.026-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: ce4e2096-558d-4be6-864c-543c90cb022a: test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 (11ae90f2-60e4-4ba4-9937-49910b6ea6ca ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.026-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.027-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.028-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f with provided UUID: bf0549ab-8e22-4460-aac1-66a3f1facf75 and options: { uuid: UUID("bf0549ab-8e22-4460-aac1-66a3f1facf75"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.028-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.028-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (ec38d6fa-6e33-4f60-b07d-5a539df38f7d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 1516), t: 1 } and commit timestamp Timestamp(1574796785, 1516)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.028-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (ec38d6fa-6e33-4f60-b07d-5a539df38f7d).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.028-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection c9f16cbb-93d4-4708-a595-3d1f352c459d from test5_fsmdb0.tmp.agg_out.53ba7410-b8bd-4ca3-974b-50a6965b80c4 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.028-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ec38d6fa-6e33-4f60-b07d-5a539df38f7d)'. Ident: 'index-1024--4104909142373009110', commit timestamp: 'Timestamp(1574796785, 1516)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.028-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ec38d6fa-6e33-4f60-b07d-5a539df38f7d)'. Ident: 'index-1035--4104909142373009110', commit timestamp: 'Timestamp(1574796785, 1516)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.028-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1023--4104909142373009110, commit timestamp: Timestamp(1574796785, 1516)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.029-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.031-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 398d7f1c-3d79-4fe3-b905-3c8736aac76e: test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 ( 9b9410b1-eb46-4ac9-980f-6791a5dcb237 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.034-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796785, 1516) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796785, 1580), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 8742 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 106ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.035-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.035-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.035-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (9b9410b1-eb46-4ac9-980f-6791a5dcb237) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 377), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:06.036-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796785, 764), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 188ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.039-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: ce4e2096-558d-4be6-864c-543c90cb022a: test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 ( 11ae90f2-60e4-4ba4-9937-49910b6ea6ca ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.050-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:06.072-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796785, 1511), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 187ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.035-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (9b9410b1-eb46-4ac9-980f-6791a5dcb237).
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:06.106-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796785, 1580), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 184ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.045-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.050-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.035-0500 I STORAGE [conn114] renameCollection: renaming collection 11ae90f2-60e4-4ba4-9937-49910b6ea6ca from test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:06.186-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796785, 2654), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 187ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:06.151-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796785, 2341), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 191ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.060-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 (e98c731b-a779-4d7e-8c43-71dbc93801e6) to test5_fsmdb0.agg_out and drop c9f16cbb-93d4-4708-a595-3d1f352c459d.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.035-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9b9410b1-eb46-4ac9-980f-6791a5dcb237)'. Ident: 'index-1030-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 377)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.050-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 900decd2-4b4a-4422-820e-a2fd2ed597d5: test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 (11ae90f2-60e4-4ba4-9937-49910b6ea6ca ): indexes: 1
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:06.252-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796786, 946), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 178ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:06.229-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796786, 441), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 191ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.061-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (c9f16cbb-93d4-4708-a595-3d1f352c459d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 2277), t: 1 } and commit timestamp Timestamp(1574796785, 2277)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.035-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9b9410b1-eb46-4ac9-980f-6791a5dcb237)'. Ident: 'index-1031-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 377)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.050-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:06.382-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796786, 2525), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 194ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:06.295-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796786, 1323), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 187ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.061-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (c9f16cbb-93d4-4708-a595-3d1f352c459d).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.035-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1028-8224331490264904478, commit timestamp: Timestamp(1574796786, 377)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.051-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:06.331-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796786, 1894), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 179ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.061-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection e98c731b-a779-4d7e-8c43-71dbc93801e6 from test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.036-0500 I INDEX [conn46] Registering index build: 87f6e428-71a1-4091-a949-cf4b836e30e2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.051-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f with provided UUID: bf0549ab-8e22-4460-aac1-66a3f1facf75 and options: { uuid: UUID("bf0549ab-8e22-4460-aac1-66a3f1facf75"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:09.044-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796786, 3533), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2791ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.061-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c9f16cbb-93d4-4708-a595-3d1f352c459d)'. Ident: 'index-1030--8000595249233899911', commit timestamp: 'Timestamp(1574796785, 2277)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.036-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6068367134957490153, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2780791310008330692, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796785848), clusterTime: Timestamp(1574796785, 764) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796785, 764), signature: { hash: BinData(0, 1CD6CC44447C325D03E90CA58A77FD3F916A311A), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 187ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.053-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.061-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c9f16cbb-93d4-4708-a595-3d1f352c459d)'. Ident: 'index-1039--8000595249233899911', commit timestamp: 'Timestamp(1574796785, 2277)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.036-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.062-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 900decd2-4b4a-4422-820e-a2fd2ed597d5: test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 ( 11ae90f2-60e4-4ba4-9937-49910b6ea6ca ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.061-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1029--8000595249233899911, commit timestamp: Timestamp(1574796785, 2277)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.039-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 with generated UUID: 1268ca6e-19ba-4107-887b-b990537ae086 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.071-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.064-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e with provided UUID: 1f7a950b-5169-4f9e-b79c-19f7533a1b9f and options: { uuid: UUID("1f7a950b-5169-4f9e-b79c-19f7533a1b9f"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.044-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.078-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 (e98c731b-a779-4d7e-8c43-71dbc93801e6) to test5_fsmdb0.agg_out and drop c9f16cbb-93d4-4708-a595-3d1f352c459d.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.079-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.060-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.078-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (c9f16cbb-93d4-4708-a595-3d1f352c459d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 2277), t: 1 } and commit timestamp Timestamp(1574796785, 2277)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.099-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.060-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.079-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (c9f16cbb-93d4-4708-a595-3d1f352c459d).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.100-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.060-0500 I STORAGE [conn46] Index build initialized: 87f6e428-71a1-4091-a949-cf4b836e30e2: test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 (eb1f68ad-9646-4e0b-a7b3-59019165c923 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.079-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection e98c731b-a779-4d7e-8c43-71dbc93801e6 from test5_fsmdb0.tmp.agg_out.7d3e1d48-e33b-4c18-8629-8b5a613c8831 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.100-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: b5ea6b94-4195-45b9-9a71-30b02b9130cb: test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 (c6cc280d-e61a-486c-829e-c2da1507d01b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.060-0500 I INDEX [conn46] Waiting for index build to complete: 87f6e428-71a1-4091-a949-cf4b836e30e2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.079-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c9f16cbb-93d4-4708-a595-3d1f352c459d)'. Ident: 'index-1030--4104909142373009110', commit timestamp: 'Timestamp(1574796785, 2277)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.100-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.063-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.079-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c9f16cbb-93d4-4708-a595-3d1f352c459d)'. Ident: 'index-1039--4104909142373009110', commit timestamp: 'Timestamp(1574796785, 2277)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.100-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.071-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.079-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1029--4104909142373009110, commit timestamp: Timestamp(1574796785, 2277)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.102-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 (9b9410b1-eb46-4ac9-980f-6791a5dcb237) to test5_fsmdb0.agg_out and drop e98c731b-a779-4d7e-8c43-71dbc93801e6.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.071-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.081-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e with provided UUID: 1f7a950b-5169-4f9e-b79c-19f7533a1b9f and options: { uuid: UUID("1f7a950b-5169-4f9e-b79c-19f7533a1b9f"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.102-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.071-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (11ae90f2-60e4-4ba4-9937-49910b6ea6ca) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 882), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.095-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.102-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (e98c731b-a779-4d7e-8c43-71dbc93801e6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 2590), t: 1 } and commit timestamp Timestamp(1574796785, 2590)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.071-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (11ae90f2-60e4-4ba4-9937-49910b6ea6ca).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.124-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.102-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (e98c731b-a779-4d7e-8c43-71dbc93801e6).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.071-0500 I STORAGE [conn112] renameCollection: renaming collection c6cc280d-e61a-486c-829e-c2da1507d01b from test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.124-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.102-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 9b9410b1-eb46-4ac9-980f-6791a5dcb237 from test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.071-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (11ae90f2-60e4-4ba4-9937-49910b6ea6ca)'. Ident: 'index-1034-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 882)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.124-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 13bfe19a-4dea-47ad-9307-903c25c75a41: test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 (c6cc280d-e61a-486c-829e-c2da1507d01b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.102-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e98c731b-a779-4d7e-8c43-71dbc93801e6)'. Ident: 'index-1032--8000595249233899911', commit timestamp: 'Timestamp(1574796785, 2590)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.071-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (11ae90f2-60e4-4ba4-9937-49910b6ea6ca)'. Ident: 'index-1035-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 882)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.124-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.102-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e98c731b-a779-4d7e-8c43-71dbc93801e6)'. Ident: 'index-1041--8000595249233899911', commit timestamp: 'Timestamp(1574796785, 2590)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.071-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1032-8224331490264904478, commit timestamp: Timestamp(1574796786, 882)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.124-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.102-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1031--8000595249233899911, commit timestamp: Timestamp(1574796785, 2590)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.071-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.126-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 (9b9410b1-eb46-4ac9-980f-6791a5dcb237) to test5_fsmdb0.agg_out and drop e98c731b-a779-4d7e-8c43-71dbc93801e6.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.105-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 with provided UUID: eb1f68ad-9646-4e0b-a7b3-59019165c923 and options: { uuid: UUID("eb1f68ad-9646-4e0b-a7b3-59019165c923"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.071-0500 I INDEX [conn114] Registering index build: 327d43de-b9ae-4e10-ac28-191d3e4af29e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.126-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.108-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: b5ea6b94-4195-45b9-9a71-30b02b9130cb: test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 ( c6cc280d-e61a-486c-829e-c2da1507d01b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.072-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: a81cd4d5-a57b-4445-8efb-965d8fc5c223: test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e ( 1f7a950b-5169-4f9e-b79c-19f7533a1b9f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.126-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (e98c731b-a779-4d7e-8c43-71dbc93801e6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796785, 2590), t: 1 } and commit timestamp Timestamp(1574796785, 2590)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.125-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.072-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6061916276773582506, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1100178471371338171, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796785885), clusterTime: Timestamp(1574796785, 1511) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796785, 1511), signature: { hash: BinData(0, 1CD6CC44447C325D03E90CA58A77FD3F916A311A), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 186ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.126-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (e98c731b-a779-4d7e-8c43-71dbc93801e6).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.145-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.075-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b with generated UUID: 132bb45b-faac-4bba-8b64-7801c263e2c3 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.127-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 9b9410b1-eb46-4ac9-980f-6791a5dcb237 from test5_fsmdb0.tmp.agg_out.7e2bb204-930a-4412-b41a-5877e4b07027 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.145-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.082-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.127-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e98c731b-a779-4d7e-8c43-71dbc93801e6)'. Ident: 'index-1032--4104909142373009110', commit timestamp: 'Timestamp(1574796785, 2590)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.145-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 294d79f3-610f-4e3d-8c98-4af74bfbb391: test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f (bf0549ab-8e22-4460-aac1-66a3f1facf75 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.096-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.127-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e98c731b-a779-4d7e-8c43-71dbc93801e6)'. Ident: 'index-1041--4104909142373009110', commit timestamp: 'Timestamp(1574796785, 2590)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.145-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.096-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.127-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1031--4104909142373009110, commit timestamp: Timestamp(1574796785, 2590)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.146-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.096-0500 I STORAGE [conn114] Index build initialized: 327d43de-b9ae-4e10-ac28-191d3e4af29e: test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 (1268ca6e-19ba-4107-887b-b990537ae086 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.128-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 13bfe19a-4dea-47ad-9307-903c25c75a41: test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 ( c6cc280d-e61a-486c-829e-c2da1507d01b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.148-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.096-0500 I INDEX [conn114] Waiting for index build to complete: 327d43de-b9ae-4e10-ac28-191d3e4af29e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.129-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 with provided UUID: eb1f68ad-9646-4e0b-a7b3-59019165c923 and options: { uuid: UUID("eb1f68ad-9646-4e0b-a7b3-59019165c923"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.149-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 (11ae90f2-60e4-4ba4-9937-49910b6ea6ca) to test5_fsmdb0.agg_out and drop 9b9410b1-eb46-4ac9-980f-6791a5dcb237.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.096-0500 I INDEX [conn108] Index build completed: a81cd4d5-a57b-4445-8efb-965d8fc5c223
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.144-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.149-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (9b9410b1-eb46-4ac9-980f-6791a5dcb237) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 377), t: 1 } and commit timestamp Timestamp(1574796786, 377)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.097-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796785, 2588), signature: { hash: BinData(0, 1CD6CC44447C325D03E90CA58A77FD3F916A311A), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 2468 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 102ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.163-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.149-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (9b9410b1-eb46-4ac9-980f-6791a5dcb237).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.099-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.163-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.149-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 11ae90f2-60e4-4ba4-9937-49910b6ea6ca from test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.105-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.163-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 769a18c2-5ab8-450e-a113-cd587f1493df: test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f (bf0549ab-8e22-4460-aac1-66a3f1facf75 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.149-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9b9410b1-eb46-4ac9-980f-6791a5dcb237)'. Ident: 'index-1038--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 377)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.106-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.164-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.149-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9b9410b1-eb46-4ac9-980f-6791a5dcb237)'. Ident: 'index-1047--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 377)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.106-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (c6cc280d-e61a-486c-829e-c2da1507d01b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 1323), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.164-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.149-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1037--8000595249233899911, commit timestamp: Timestamp(1574796786, 377)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.106-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (c6cc280d-e61a-486c-829e-c2da1507d01b).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.166-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.151-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 294d79f3-610f-4e3d-8c98-4af74bfbb391: test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f ( bf0549ab-8e22-4460-aac1-66a3f1facf75 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.106-0500 I STORAGE [conn110] renameCollection: renaming collection bf0549ab-8e22-4460-aac1-66a3f1facf75 from test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.168-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 (11ae90f2-60e4-4ba4-9937-49910b6ea6ca) to test5_fsmdb0.agg_out and drop 9b9410b1-eb46-4ac9-980f-6791a5dcb237.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.152-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 with provided UUID: 1268ca6e-19ba-4107-887b-b990537ae086 and options: { uuid: UUID("1268ca6e-19ba-4107-887b-b990537ae086"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.106-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c6cc280d-e61a-486c-829e-c2da1507d01b)'. Ident: 'index-1038-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 1323)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.168-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (9b9410b1-eb46-4ac9-980f-6791a5dcb237) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 377), t: 1 } and commit timestamp Timestamp(1574796786, 377)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.170-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.106-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c6cc280d-e61a-486c-829e-c2da1507d01b)'. Ident: 'index-1039-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 1323)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.168-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (9b9410b1-eb46-4ac9-980f-6791a5dcb237).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.190-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.106-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1036-8224331490264904478, commit timestamp: Timestamp(1574796786, 1323)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.168-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 11ae90f2-60e4-4ba4-9937-49910b6ea6ca from test5_fsmdb0.tmp.agg_out.df8f67df-f3f0-480b-b31c-1058e6f6c6d3 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.190-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.106-0500 I INDEX [conn112] Registering index build: 8c49ab71-c8e1-498e-9223-df96e3a5d28c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.168-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9b9410b1-eb46-4ac9-980f-6791a5dcb237)'. Ident: 'index-1038--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 377)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.190-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 16da8b26-fadc-4da4-bdb7-fb53903915fb: test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e (1f7a950b-5169-4f9e-b79c-19f7533a1b9f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.106-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.168-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9b9410b1-eb46-4ac9-980f-6791a5dcb237)'. Ident: 'index-1047--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 377)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.190-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.106-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4057552551543961322, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 270875316355110369, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796785922), clusterTime: Timestamp(1574796785, 1580) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796785, 1580), signature: { hash: BinData(0, 1CD6CC44447C325D03E90CA58A77FD3F916A311A), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 183ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.168-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1037--4104909142373009110, commit timestamp: Timestamp(1574796786, 377)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.190-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.107-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 87f6e428-71a1-4091-a949-cf4b836e30e2: test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 ( eb1f68ad-9646-4e0b-a7b3-59019165c923 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.170-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796786, 377) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796786, 441), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 12843 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 131ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.191-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 (c6cc280d-e61a-486c-829e-c2da1507d01b) to test5_fsmdb0.agg_out and drop 11ae90f2-60e4-4ba4-9937-49910b6ea6ca.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.109-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b with generated UUID: 2ec7bfe2-db2b-4950-8542-ca770fed7a50 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.171-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 769a18c2-5ab8-450e-a113-cd587f1493df: test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f ( bf0549ab-8e22-4460-aac1-66a3f1facf75 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.192-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.110-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.171-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 with provided UUID: 1268ca6e-19ba-4107-887b-b990537ae086 and options: { uuid: UUID("1268ca6e-19ba-4107-887b-b990537ae086"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.193-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (11ae90f2-60e4-4ba4-9937-49910b6ea6ca) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 882), t: 1 } and commit timestamp Timestamp(1574796786, 882)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.127-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.186-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.193-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (11ae90f2-60e4-4ba4-9937-49910b6ea6ca).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.136-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.207-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.193-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection c6cc280d-e61a-486c-829e-c2da1507d01b from test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.136-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.207-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.193-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (11ae90f2-60e4-4ba4-9937-49910b6ea6ca)'. Ident: 'index-1044--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 882)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.136-0500 I STORAGE [conn112] Index build initialized: 8c49ab71-c8e1-498e-9223-df96e3a5d28c: test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b (132bb45b-faac-4bba-8b64-7801c263e2c3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.207-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 340f942d-2245-42dc-bae9-9cd7c5214067: test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e (1f7a950b-5169-4f9e-b79c-19f7533a1b9f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.193-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (11ae90f2-60e4-4ba4-9937-49910b6ea6ca)'. Ident: 'index-1049--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 882)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.136-0500 I INDEX [conn112] Waiting for index build to complete: 8c49ab71-c8e1-498e-9223-df96e3a5d28c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.207-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.193-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1043--8000595249233899911, commit timestamp: Timestamp(1574796786, 882)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.136-0500 I INDEX [conn46] Index build completed: 87f6e428-71a1-4091-a949-cf4b836e30e2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.207-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.194-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 16da8b26-fadc-4da4-bdb7-fb53903915fb: test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e ( 1f7a950b-5169-4f9e-b79c-19f7533a1b9f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.136-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.208-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 (c6cc280d-e61a-486c-829e-c2da1507d01b) to test5_fsmdb0.agg_out and drop 11ae90f2-60e4-4ba4-9937-49910b6ea6ca.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.195-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b with provided UUID: 132bb45b-faac-4bba-8b64-7801c263e2c3 and options: { uuid: UUID("132bb45b-faac-4bba-8b64-7801c263e2c3"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.136-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796786, 377), signature: { hash: BinData(0, CFEA4FE89D137DA05B4194320336D8365880E1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 133 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 100ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.210-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.214-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.144-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.210-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (11ae90f2-60e4-4ba4-9937-49910b6ea6ca) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 882), t: 1 } and commit timestamp Timestamp(1574796786, 882)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.235-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.146-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 327d43de-b9ae-4e10-ac28-191d3e4af29e: test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 ( 1268ca6e-19ba-4107-887b-b990537ae086 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.210-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (11ae90f2-60e4-4ba4-9937-49910b6ea6ca).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.235-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.146-0500 I INDEX [conn114] Index build completed: 327d43de-b9ae-4e10-ac28-191d3e4af29e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.210-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection c6cc280d-e61a-486c-829e-c2da1507d01b from test5_fsmdb0.tmp.agg_out.7109410c-45d9-4e29-8c4e-a3724bbf79c7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.235-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: bec68cc8-7437-4139-8526-a8559a9e199b: test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 (eb1f68ad-9646-4e0b-a7b3-59019165c923 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.147-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.210-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (11ae90f2-60e4-4ba4-9937-49910b6ea6ca)'. Ident: 'index-1044--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 882)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.235-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.150-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.210-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (11ae90f2-60e4-4ba4-9937-49910b6ea6ca)'. Ident: 'index-1049--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 882)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.236-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.150-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.210-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1043--4104909142373009110, commit timestamp: Timestamp(1574796786, 882)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.237-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f (bf0549ab-8e22-4460-aac1-66a3f1facf75) to test5_fsmdb0.agg_out and drop c6cc280d-e61a-486c-829e-c2da1507d01b.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.150-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (bf0549ab-8e22-4460-aac1-66a3f1facf75) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 1830), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.213-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 340f942d-2245-42dc-bae9-9cd7c5214067: test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e ( 1f7a950b-5169-4f9e-b79c-19f7533a1b9f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.238-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.150-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (bf0549ab-8e22-4460-aac1-66a3f1facf75).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.215-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b with provided UUID: 132bb45b-faac-4bba-8b64-7801c263e2c3 and options: { uuid: UUID("132bb45b-faac-4bba-8b64-7801c263e2c3"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.238-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (c6cc280d-e61a-486c-829e-c2da1507d01b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 1323), t: 1 } and commit timestamp Timestamp(1574796786, 1323)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.150-0500 I STORAGE [conn108] renameCollection: renaming collection 1f7a950b-5169-4f9e-b79c-19f7533a1b9f from test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.229-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.238-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (c6cc280d-e61a-486c-829e-c2da1507d01b).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.150-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bf0549ab-8e22-4460-aac1-66a3f1facf75)'. Ident: 'index-1042-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 1830)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.252-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.238-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection bf0549ab-8e22-4460-aac1-66a3f1facf75 from test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.150-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bf0549ab-8e22-4460-aac1-66a3f1facf75)'. Ident: 'index-1043-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 1830)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.252-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.238-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c6cc280d-e61a-486c-829e-c2da1507d01b)'. Ident: 'index-1046--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 1323)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.150-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1040-8224331490264904478, commit timestamp: Timestamp(1574796786, 1830)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.252-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 4875e2eb-7461-4990-b3bd-1f75166e0f86: test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 (eb1f68ad-9646-4e0b-a7b3-59019165c923 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.238-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c6cc280d-e61a-486c-829e-c2da1507d01b)'. Ident: 'index-1055--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 1323)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.150-0500 I INDEX [conn110] Registering index build: 05074da3-f0fb-499c-99a0-1beca39caf06
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.253-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.238-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1045--8000595249233899911, commit timestamp: Timestamp(1574796786, 1323)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.151-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7698979242505368340, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5819470021743066254, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796785959), clusterTime: Timestamp(1574796785, 2341) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796785, 2341), signature: { hash: BinData(0, 1CD6CC44447C325D03E90CA58A77FD3F916A311A), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 188ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.253-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.239-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b with provided UUID: 2ec7bfe2-db2b-4950-8542-ca770fed7a50 and options: { uuid: UUID("2ec7bfe2-db2b-4950-8542-ca770fed7a50"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.152-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 8c49ab71-c8e1-498e-9223-df96e3a5d28c: test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b ( 132bb45b-faac-4bba-8b64-7801c263e2c3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.254-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f (bf0549ab-8e22-4460-aac1-66a3f1facf75) to test5_fsmdb0.agg_out and drop c6cc280d-e61a-486c-829e-c2da1507d01b.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.240-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: bec68cc8-7437-4139-8526-a8559a9e199b: test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 ( eb1f68ad-9646-4e0b-a7b3-59019165c923 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.154-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 with generated UUID: e9afcedc-a466-42a1-8f5c-988f1c5fba36 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.256-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.254-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.178-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.256-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (c6cc280d-e61a-486c-829e-c2da1507d01b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 1323), t: 1 } and commit timestamp Timestamp(1574796786, 1323)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.271-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.178-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.256-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (c6cc280d-e61a-486c-829e-c2da1507d01b).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.271-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.178-0500 I STORAGE [conn110] Index build initialized: 05074da3-f0fb-499c-99a0-1beca39caf06: test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b (2ec7bfe2-db2b-4950-8542-ca770fed7a50 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.256-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection bf0549ab-8e22-4460-aac1-66a3f1facf75 from test5_fsmdb0.tmp.agg_out.7649eee3-be12-426e-8d21-81b7d040115f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.272-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 789c2a3d-9126-4af9-a1b6-94e7918f29ce: test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 (1268ca6e-19ba-4107-887b-b990537ae086 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.178-0500 I INDEX [conn110] Waiting for index build to complete: 05074da3-f0fb-499c-99a0-1beca39caf06
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.256-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c6cc280d-e61a-486c-829e-c2da1507d01b)'. Ident: 'index-1046--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 1323)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.272-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.178-0500 I INDEX [conn112] Index build completed: 8c49ab71-c8e1-498e-9223-df96e3a5d28c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.256-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c6cc280d-e61a-486c-829e-c2da1507d01b)'. Ident: 'index-1055--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 1323)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.272-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.185-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.256-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1045--4104909142373009110, commit timestamp: Timestamp(1574796786, 1323)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.275-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.185-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.257-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b with provided UUID: 2ec7bfe2-db2b-4950-8542-ca770fed7a50 and options: { uuid: UUID("2ec7bfe2-db2b-4950-8542-ca770fed7a50"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.277-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 789c2a3d-9126-4af9-a1b6-94e7918f29ce: test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 ( 1268ca6e-19ba-4107-887b-b990537ae086 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.185-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (1f7a950b-5169-4f9e-b79c-19f7533a1b9f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 2461), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.257-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4875e2eb-7461-4990-b3bd-1f75166e0f86: test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 ( eb1f68ad-9646-4e0b-a7b3-59019165c923 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.294-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.185-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (1f7a950b-5169-4f9e-b79c-19f7533a1b9f).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.273-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.294-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.185-0500 I STORAGE [conn46] renameCollection: renaming collection eb1f68ad-9646-4e0b-a7b3-59019165c923 from test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.291-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.294-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: fe6cc748-a3fa-41f8-b886-f327b8c68d76: test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b (132bb45b-faac-4bba-8b64-7801c263e2c3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.185-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1f7a950b-5169-4f9e-b79c-19f7533a1b9f)'. Ident: 'index-1046-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 2461)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.291-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.294-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.185-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1f7a950b-5169-4f9e-b79c-19f7533a1b9f)'. Ident: 'index-1047-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 2461)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.291-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 70c4bf0b-09e2-4586-93a3-45483ca6841c: test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 (1268ca6e-19ba-4107-887b-b990537ae086 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.295-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.185-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1044-8224331490264904478, commit timestamp: Timestamp(1574796786, 2461)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.291-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.296-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e (1f7a950b-5169-4f9e-b79c-19f7533a1b9f) to test5_fsmdb0.agg_out and drop bf0549ab-8e22-4460-aac1-66a3f1facf75.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.186-0500 I INDEX [conn108] Registering index build: 1dd99aec-f832-43a0-90f1-463dcd0eea21
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.292-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.298-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.186-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.293-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.298-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (bf0549ab-8e22-4460-aac1-66a3f1facf75) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 1830), t: 1 } and commit timestamp Timestamp(1574796786, 1830)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.186-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1796730368956721020, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7832599858501130249, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796785998), clusterTime: Timestamp(1574796785, 2654) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796785, 2654), signature: { hash: BinData(0, 1CD6CC44447C325D03E90CA58A77FD3F916A311A), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 186ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.296-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 70c4bf0b-09e2-4586-93a3-45483ca6841c: test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 ( 1268ca6e-19ba-4107-887b-b990537ae086 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.298-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (bf0549ab-8e22-4460-aac1-66a3f1facf75).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.186-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.312-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.298-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 1f7a950b-5169-4f9e-b79c-19f7533a1b9f from test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.189-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a with generated UUID: 35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.313-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.298-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bf0549ab-8e22-4460-aac1-66a3f1facf75)'. Ident: 'index-1052--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 1830)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.189-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.313-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 9ed91b29-f224-4c02-8caa-1ace1a27dcb4: test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b (132bb45b-faac-4bba-8b64-7801c263e2c3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.298-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bf0549ab-8e22-4460-aac1-66a3f1facf75)'. Ident: 'index-1059--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 1830)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.210-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 05074da3-f0fb-499c-99a0-1beca39caf06: test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b ( 2ec7bfe2-db2b-4950-8542-ca770fed7a50 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.313-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.298-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1051--8000595249233899911, commit timestamp: Timestamp(1574796786, 1830)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.219-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.313-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.300-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: fe6cc748-a3fa-41f8-b886-f327b8c68d76: test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b ( 132bb45b-faac-4bba-8b64-7801c263e2c3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.219-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.314-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e (1f7a950b-5169-4f9e-b79c-19f7533a1b9f) to test5_fsmdb0.agg_out and drop bf0549ab-8e22-4460-aac1-66a3f1facf75.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.302-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 with provided UUID: e9afcedc-a466-42a1-8f5c-988f1c5fba36 and options: { uuid: UUID("e9afcedc-a466-42a1-8f5c-988f1c5fba36"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.219-0500 I STORAGE [conn108] Index build initialized: 1dd99aec-f832-43a0-90f1-463dcd0eea21: test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 (e9afcedc-a466-42a1-8f5c-988f1c5fba36 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.316-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.317-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.219-0500 I INDEX [conn108] Waiting for index build to complete: 1dd99aec-f832-43a0-90f1-463dcd0eea21
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.317-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (bf0549ab-8e22-4460-aac1-66a3f1facf75) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 1830), t: 1 } and commit timestamp Timestamp(1574796786, 1830)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.322-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 (eb1f68ad-9646-4e0b-a7b3-59019165c923) to test5_fsmdb0.agg_out and drop 1f7a950b-5169-4f9e-b79c-19f7533a1b9f.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.219-0500 I INDEX [conn110] Index build completed: 05074da3-f0fb-499c-99a0-1beca39caf06
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.317-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (bf0549ab-8e22-4460-aac1-66a3f1facf75).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.322-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (1f7a950b-5169-4f9e-b79c-19f7533a1b9f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 2461), t: 1 } and commit timestamp Timestamp(1574796786, 2461)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.227-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.317-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 1f7a950b-5169-4f9e-b79c-19f7533a1b9f from test5_fsmdb0.tmp.agg_out.163c2ff0-80b2-494d-a956-6bc9a2c78f1e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.322-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (1f7a950b-5169-4f9e-b79c-19f7533a1b9f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.228-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.317-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bf0549ab-8e22-4460-aac1-66a3f1facf75)'. Ident: 'index-1052--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 1830)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.322-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection eb1f68ad-9646-4e0b-a7b3-59019165c923 from test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.228-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (eb1f68ad-9646-4e0b-a7b3-59019165c923) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 2902), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.317-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bf0549ab-8e22-4460-aac1-66a3f1facf75)'. Ident: 'index-1059--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 1830)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.322-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1f7a950b-5169-4f9e-b79c-19f7533a1b9f)'. Ident: 'index-1054--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 2461)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.228-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (eb1f68ad-9646-4e0b-a7b3-59019165c923).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.317-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1051--4104909142373009110, commit timestamp: Timestamp(1574796786, 1830)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.322-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1f7a950b-5169-4f9e-b79c-19f7533a1b9f)'. Ident: 'index-1063--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 2461)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.228-0500 I STORAGE [conn114] renameCollection: renaming collection 1268ca6e-19ba-4107-887b-b990537ae086 from test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.319-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 9ed91b29-f224-4c02-8caa-1ace1a27dcb4: test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b ( 132bb45b-faac-4bba-8b64-7801c263e2c3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.322-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1053--8000595249233899911, commit timestamp: Timestamp(1574796786, 2461)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.228-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (eb1f68ad-9646-4e0b-a7b3-59019165c923)'. Ident: 'index-1050-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 2902)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.321-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 with provided UUID: e9afcedc-a466-42a1-8f5c-988f1c5fba36 and options: { uuid: UUID("e9afcedc-a466-42a1-8f5c-988f1c5fba36"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.324-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a with provided UUID: 35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe and options: { uuid: UUID("35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.228-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (eb1f68ad-9646-4e0b-a7b3-59019165c923)'. Ident: 'index-1051-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 2902)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.336-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.337-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.228-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1048-8224331490264904478, commit timestamp: Timestamp(1574796786, 2902)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.341-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796786, 2279) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796786, 2407), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 13068 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 167ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.351-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.228-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.342-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 (eb1f68ad-9646-4e0b-a7b3-59019165c923) to test5_fsmdb0.agg_out and drop 1f7a950b-5169-4f9e-b79c-19f7533a1b9f.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.351-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.228-0500 I INDEX [conn46] Registering index build: 1036be89-61b9-4180-a6b2-46d0451d5e8b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.342-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (1f7a950b-5169-4f9e-b79c-19f7533a1b9f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 2461), t: 1 } and commit timestamp Timestamp(1574796786, 2461)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.351-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 36a8e4e3-94ce-4af8-89b3-4735e581890a: test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b (2ec7bfe2-db2b-4950-8542-ca770fed7a50 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.228-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2943537885462525948, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2407759998530148315, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796786037), clusterTime: Timestamp(1574796786, 441) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796786, 441), signature: { hash: BinData(0, CFEA4FE89D137DA05B4194320336D8365880E1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, 
Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 190ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.342-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (1f7a950b-5169-4f9e-b79c-19f7533a1b9f).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.351-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.229-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.342-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection eb1f68ad-9646-4e0b-a7b3-59019165c923 from test5_fsmdb0.tmp.agg_out.de81386a-7590-4a9c-ade3-7f9827b017d2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.351-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.230-0500 I COMMAND [conn65] CMD: dropIndexes test5_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.342-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1f7a950b-5169-4f9e-b79c-19f7533a1b9f)'. Ident: 'index-1054--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 2461)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.353-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.232-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.342-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1f7a950b-5169-4f9e-b79c-19f7533a1b9f)'. Ident: 'index-1063--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 2461)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.357-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 36a8e4e3-94ce-4af8-89b3-4735e581890a: test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b ( 2ec7bfe2-db2b-4950-8542-ca770fed7a50 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.242-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 1dd99aec-f832-43a0-90f1-463dcd0eea21: test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 ( e9afcedc-a466-42a1-8f5c-988f1c5fba36 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.342-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1053--4104909142373009110, commit timestamp: Timestamp(1574796786, 2461)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.359-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 (1268ca6e-19ba-4107-887b-b990537ae086) to test5_fsmdb0.agg_out and drop eb1f68ad-9646-4e0b-a7b3-59019165c923.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.251-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.344-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a with provided UUID: 35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe and options: { uuid: UUID("35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.360-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (eb1f68ad-9646-4e0b-a7b3-59019165c923) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 2902), t: 1 } and commit timestamp Timestamp(1574796786, 2902)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.251-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.358-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.360-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (eb1f68ad-9646-4e0b-a7b3-59019165c923).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.251-0500 I STORAGE [conn46] Index build initialized: 1036be89-61b9-4180-a6b2-46d0451d5e8b: test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.373-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.360-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 1268ca6e-19ba-4107-887b-b990537ae086 from test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.251-0500 I INDEX [conn46] Waiting for index build to complete: 1036be89-61b9-4180-a6b2-46d0451d5e8b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.373-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.360-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (eb1f68ad-9646-4e0b-a7b3-59019165c923)'. Ident: 'index-1058--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 2902)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.251-0500 I INDEX [conn108] Index build completed: 1dd99aec-f832-43a0-90f1-463dcd0eea21
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.373-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 04fe2bf6-df41-4f5c-87f1-ddba8b78eeaf: test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b (2ec7bfe2-db2b-4950-8542-ca770fed7a50 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.360-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (eb1f68ad-9646-4e0b-a7b3-59019165c923)'. Ident: 'index-1067--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 2902)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.251-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.374-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.360-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1057--8000595249233899911, commit timestamp: Timestamp(1574796786, 2902)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.251-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (1268ca6e-19ba-4107-887b-b990537ae086) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 3534), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.374-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.377-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.251-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (1268ca6e-19ba-4107-887b-b990537ae086).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.377-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.377-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.251-0500 I STORAGE [conn112] renameCollection: renaming collection 132bb45b-faac-4bba-8b64-7801c263e2c3 from test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.379-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 (1268ca6e-19ba-4107-887b-b990537ae086) to test5_fsmdb0.agg_out and drop eb1f68ad-9646-4e0b-a7b3-59019165c923.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.377-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: a43a2bb1-8c45-4064-9f3d-d723d9fcd0bb: test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 (e9afcedc-a466-42a1-8f5c-988f1c5fba36 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.251-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1268ca6e-19ba-4107-887b-b990537ae086)'. Ident: 'index-1054-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 3534)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.379-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (eb1f68ad-9646-4e0b-a7b3-59019165c923) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 2902), t: 1 } and commit timestamp Timestamp(1574796786, 2902)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.377-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.251-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1268ca6e-19ba-4107-887b-b990537ae086)'. Ident: 'index-1055-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 3534)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.379-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (eb1f68ad-9646-4e0b-a7b3-59019165c923).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.377-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.251-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1052-8224331490264904478, commit timestamp: Timestamp(1574796786, 3534)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.379-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 1268ca6e-19ba-4107-887b-b990537ae086 from test5_fsmdb0.tmp.agg_out.16ba2237-10ff-4abf-9bc3-7904ffb6c839 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.380-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.251-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.379-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (eb1f68ad-9646-4e0b-a7b3-59019165c923)'. Ident: 'index-1058--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 2902)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.384-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b (132bb45b-faac-4bba-8b64-7801c263e2c3) to test5_fsmdb0.agg_out and drop 1268ca6e-19ba-4107-887b-b990537ae086.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.251-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8173244757538332939, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4331918874611758439, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796786073), clusterTime: Timestamp(1574796786, 946) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796786, 946), signature: { hash: BinData(0, CFEA4FE89D137DA05B4194320336D8365880E1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 176ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.379-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (eb1f68ad-9646-4e0b-a7b3-59019165c923)'. Ident: 'index-1067--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 2902)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.384-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (1268ca6e-19ba-4107-887b-b990537ae086) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 3534), t: 1 } and commit timestamp Timestamp(1574796786, 3534)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.253-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c with generated UUID: 3fc9e48e-6e3c-45da-8698-64331d35b1fe and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.379-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1057--4104909142373009110, commit timestamp: Timestamp(1574796786, 2902)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.384-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (1268ca6e-19ba-4107-887b-b990537ae086).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.254-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 with generated UUID: 02a743a3-42ee-45fe-b5eb-5ff4dad76330 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.380-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 04fe2bf6-df41-4f5c-87f1-ddba8b78eeaf: test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b ( 2ec7bfe2-db2b-4950-8542-ca770fed7a50 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.384-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 132bb45b-faac-4bba-8b64-7801c263e2c3 from test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.254-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.397-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.384-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1268ca6e-19ba-4107-887b-b990537ae086)'. Ident: 'index-1062--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 3534)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.276-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.397-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.384-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1268ca6e-19ba-4107-887b-b990537ae086)'. Ident: 'index-1071--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 3534)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.285-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.397-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 519a97ba-1ac2-40eb-9aa1-8a64c0c99e18: test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 (e9afcedc-a466-42a1-8f5c-988f1c5fba36 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.384-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1061--8000595249233899911, commit timestamp: Timestamp(1574796786, 3534)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.294-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.398-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.385-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: a43a2bb1-8c45-4064-9f3d-d723d9fcd0bb: test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 ( e9afcedc-a466-42a1-8f5c-988f1c5fba36 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.295-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.398-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.388-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c with provided UUID: 3fc9e48e-6e3c-45da-8698-64331d35b1fe and options: { uuid: UUID("3fc9e48e-6e3c-45da-8698-64331d35b1fe"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.295-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (132bb45b-faac-4bba-8b64-7801c263e2c3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 3911), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.400-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.403-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.295-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (132bb45b-faac-4bba-8b64-7801c263e2c3).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.404-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 519a97ba-1ac2-40eb-9aa1-8a64c0c99e18: test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 ( e9afcedc-a466-42a1-8f5c-988f1c5fba36 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.404-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 with provided UUID: 02a743a3-42ee-45fe-b5eb-5ff4dad76330 and options: { uuid: UUID("02a743a3-42ee-45fe-b5eb-5ff4dad76330"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.295-0500 I STORAGE [conn110] renameCollection: renaming collection 2ec7bfe2-db2b-4950-8542-ca770fed7a50 from test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.407-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b (132bb45b-faac-4bba-8b64-7801c263e2c3) to test5_fsmdb0.agg_out and drop 1268ca6e-19ba-4107-887b-b990537ae086.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.418-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.295-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (132bb45b-faac-4bba-8b64-7801c263e2c3)'. Ident: 'index-1058-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 3911)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.408-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (1268ca6e-19ba-4107-887b-b990537ae086) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 3534), t: 1 } and commit timestamp Timestamp(1574796786, 3534)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.438-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.295-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (132bb45b-faac-4bba-8b64-7801c263e2c3)'. Ident: 'index-1059-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 3911)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.408-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (1268ca6e-19ba-4107-887b-b990537ae086).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.438-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.295-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1056-8224331490264904478, commit timestamp: Timestamp(1574796786, 3911)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.408-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 132bb45b-faac-4bba-8b64-7801c263e2c3 from test5_fsmdb0.tmp.agg_out.f91f50e8-bf17-483f-80bb-3420522fdc8b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.438-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 0278fd39-e0fb-49aa-b3d5-bc02680df486: test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.295-0500 I INDEX [conn112] Registering index build: 11e31648-a082-41e1-81e5-e7a17c481fbb
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.408-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1268ca6e-19ba-4107-887b-b990537ae086)'. Ident: 'index-1062--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 3534)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.438-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.295-0500 I INDEX [conn114] Registering index build: 246831d7-77c9-4970-8bd3-9787f75587d8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.408-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1268ca6e-19ba-4107-887b-b990537ae086)'. Ident: 'index-1071--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 3534)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.439-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.295-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1580267697324391318, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6613719214454785080, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796786108), clusterTime: Timestamp(1574796786, 1323) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796786, 1323), signature: { hash: BinData(0, CFEA4FE89D137DA05B4194320336D8365880E1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 186ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.408-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1061--4104909142373009110, commit timestamp: Timestamp(1574796786, 3534)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.440-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b (2ec7bfe2-db2b-4950-8542-ca770fed7a50) to test5_fsmdb0.agg_out and drop 132bb45b-faac-4bba-8b64-7801c263e2c3.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.295-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 1036be89-61b9-4180-a6b2-46d0451d5e8b: test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a ( 35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.410-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c with provided UUID: 3fc9e48e-6e3c-45da-8698-64331d35b1fe and options: { uuid: UUID("3fc9e48e-6e3c-45da-8698-64331d35b1fe"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.442-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.299-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 with generated UUID: 9d3ec432-6e42-4475-b7a8-6e458f944706 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.428-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.442-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (132bb45b-faac-4bba-8b64-7801c263e2c3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 3911), t: 1 } and commit timestamp Timestamp(1574796786, 3911)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.322-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.428-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 with provided UUID: 02a743a3-42ee-45fe-b5eb-5ff4dad76330 and options: { uuid: UUID("02a743a3-42ee-45fe-b5eb-5ff4dad76330"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.442-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (132bb45b-faac-4bba-8b64-7801c263e2c3).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.322-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.441-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.442-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 2ec7bfe2-db2b-4950-8542-ca770fed7a50 from test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.322-0500 I STORAGE [conn112] Index build initialized: 11e31648-a082-41e1-81e5-e7a17c481fbb: test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c (3fc9e48e-6e3c-45da-8698-64331d35b1fe ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.459-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.442-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (132bb45b-faac-4bba-8b64-7801c263e2c3)'. Ident: 'index-1066--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 3911)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.323-0500 I INDEX [conn112] Waiting for index build to complete: 11e31648-a082-41e1-81e5-e7a17c481fbb
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.459-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.442-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (132bb45b-faac-4bba-8b64-7801c263e2c3)'. Ident: 'index-1073--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 3911)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.323-0500 I INDEX [conn46] Index build completed: 1036be89-61b9-4180-a6b2-46d0451d5e8b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.459-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 48dcbbf7-0421-4aee-ad27-0c49d3a7b4ee: test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.442-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1065--8000595249233899911, commit timestamp: Timestamp(1574796786, 3911)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.330-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.459-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.443-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 0278fd39-e0fb-49aa-b3d5-bc02680df486: test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a ( 35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.331-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.459-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.445-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 with provided UUID: 9d3ec432-6e42-4475-b7a8-6e458f944706 and options: { uuid: UUID("9d3ec432-6e42-4475-b7a8-6e458f944706"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.331-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (2ec7bfe2-db2b-4950-8542-ca770fed7a50) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 4350), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.460-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b (2ec7bfe2-db2b-4950-8542-ca770fed7a50) to test5_fsmdb0.agg_out and drop 132bb45b-faac-4bba-8b64-7801c263e2c3.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.460-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.331-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (2ec7bfe2-db2b-4950-8542-ca770fed7a50).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.462-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.464-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 (e9afcedc-a466-42a1-8f5c-988f1c5fba36) to test5_fsmdb0.agg_out and drop 2ec7bfe2-db2b-4950-8542-ca770fed7a50.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.331-0500 I STORAGE [conn108] renameCollection: renaming collection e9afcedc-a466-42a1-8f5c-988f1c5fba36 from test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.462-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (132bb45b-faac-4bba-8b64-7801c263e2c3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 3911), t: 1 } and commit timestamp Timestamp(1574796786, 3911)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.464-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (2ec7bfe2-db2b-4950-8542-ca770fed7a50) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 4350), t: 1 } and commit timestamp Timestamp(1574796786, 4350)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.331-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (2ec7bfe2-db2b-4950-8542-ca770fed7a50)'. Ident: 'index-1062-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 4350)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.462-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (132bb45b-faac-4bba-8b64-7801c263e2c3).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.462-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 2ec7bfe2-db2b-4950-8542-ca770fed7a50 from test5_fsmdb0.tmp.agg_out.4546ada2-139d-4e9c-aa33-58966f0ae08b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.331-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (2ec7bfe2-db2b-4950-8542-ca770fed7a50)'. Ident: 'index-1063-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 4350)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.464-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (2ec7bfe2-db2b-4950-8542-ca770fed7a50).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.462-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (132bb45b-faac-4bba-8b64-7801c263e2c3)'. Ident: 'index-1066--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 3911)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.331-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1060-8224331490264904478, commit timestamp: Timestamp(1574796786, 4350)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.464-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection e9afcedc-a466-42a1-8f5c-988f1c5fba36 from test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.462-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (132bb45b-faac-4bba-8b64-7801c263e2c3)'. Ident: 'index-1073--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 3911)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.331-0500 I INDEX [conn110] Registering index build: 70038865-179a-40ab-9961-43e1e6f9c316
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.331-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.462-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1065--4104909142373009110, commit timestamp: Timestamp(1574796786, 3911)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.464-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (2ec7bfe2-db2b-4950-8542-ca770fed7a50)'. Ident: 'index-1070--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 4350)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.331-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4473826690610967092, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7572884353730278794, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796786152), clusterTime: Timestamp(1574796786, 1894) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796786, 1894), signature: { hash: BinData(0, CFEA4FE89D137DA05B4194320336D8365880E1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 177ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.463-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 48dcbbf7-0421-4aee-ad27-0c49d3a7b4ee: test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a ( 35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.465-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (2ec7bfe2-db2b-4950-8542-ca770fed7a50)'. Ident: 'index-1079--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 4350)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.331-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.466-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 with provided UUID: 9d3ec432-6e42-4475-b7a8-6e458f944706 and options: { uuid: UUID("9d3ec432-6e42-4475-b7a8-6e458f944706"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.465-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1069--8000595249233899911, commit timestamp: Timestamp(1574796786, 4350)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.334-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f with generated UUID: 92598634-43f2-4a1c-bc97-da99ca495322 and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.480-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.465-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f with provided UUID: 92598634-43f2-4a1c-bc97-da99ca495322 and options: { uuid: UUID("92598634-43f2-4a1c-bc97-da99ca495322"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.334-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.484-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 (e9afcedc-a466-42a1-8f5c-988f1c5fba36) to test5_fsmdb0.agg_out and drop 2ec7bfe2-db2b-4950-8542-ca770fed7a50.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.482-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.349-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 11e31648-a082-41e1-81e5-e7a17c481fbb: test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c ( 3fc9e48e-6e3c-45da-8698-64331d35b1fe ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.358-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.497-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.484-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (2ec7bfe2-db2b-4950-8542-ca770fed7a50) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 4350), t: 1 } and commit timestamp Timestamp(1574796786, 4350)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.358-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.497-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.484-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (2ec7bfe2-db2b-4950-8542-ca770fed7a50).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.358-0500 I STORAGE [conn114] Index build initialized: 246831d7-77c9-4970-8bd3-9787f75587d8: test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 (02a743a3-42ee-45fe-b5eb-5ff4dad76330 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.497-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 6109a5da-afe4-48cb-b9b7-519fd88b1650: test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c (3fc9e48e-6e3c-45da-8698-64331d35b1fe ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.485-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection e9afcedc-a466-42a1-8f5c-988f1c5fba36 from test5_fsmdb0.tmp.agg_out.736153c6-0916-48ac-a620-7f2d9be8c2c2 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.358-0500 I INDEX [conn114] Waiting for index build to complete: 246831d7-77c9-4970-8bd3-9787f75587d8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.497-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.485-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (2ec7bfe2-db2b-4950-8542-ca770fed7a50)'. Ident: 'index-1070--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 4350)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.358-0500 I INDEX [conn112] Index build completed: 11e31648-a082-41e1-81e5-e7a17c481fbb
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.498-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.499-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.502-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 6109a5da-afe4-48cb-b9b7-519fd88b1650: test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c ( 3fc9e48e-6e3c-45da-8698-64331d35b1fe ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.518-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.518-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.485-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (2ec7bfe2-db2b-4950-8542-ca770fed7a50)'. Ident: 'index-1079--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 4350)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.518-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 074e0f32-92dc-42a9-972c-916892140e4f: test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 (02a743a3-42ee-45fe-b5eb-5ff4dad76330 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.358-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.485-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1069--4104909142373009110, commit timestamp: Timestamp(1574796786, 4350)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.518-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.364-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.485-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f with provided UUID: 92598634-43f2-4a1c-bc97-da99ca495322 and options: { uuid: UUID("92598634-43f2-4a1c-bc97-da99ca495322"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.519-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.365-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.500-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.520-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe) to test5_fsmdb0.agg_out and drop e9afcedc-a466-42a1-8f5c-988f1c5fba36.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.373-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.380-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.522-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.380-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.380-0500 I STORAGE [conn110] Index build initialized: 70038865-179a-40ab-9961-43e1e6f9c316: test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 (9d3ec432-6e42-4475-b7a8-6e458f944706 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.522-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (e9afcedc-a466-42a1-8f5c-988f1c5fba36) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 4858), t: 1 } and commit timestamp Timestamp(1574796786, 4858)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.381-0500 I INDEX [conn110] Waiting for index build to complete: 70038865-179a-40ab-9961-43e1e6f9c316
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.381-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.522-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (e9afcedc-a466-42a1-8f5c-988f1c5fba36).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.381-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (e9afcedc-a466-42a1-8f5c-988f1c5fba36) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 4858), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.513-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.522-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe from test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.381-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (e9afcedc-a466-42a1-8f5c-988f1c5fba36).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.513-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.522-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e9afcedc-a466-42a1-8f5c-988f1c5fba36)'. Ident: 'index-1076--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 4858)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.381-0500 I STORAGE [conn46] renameCollection: renaming collection 35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe from test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.513-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 98231aa3-f565-4961-bcf1-de495200855d: test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c (3fc9e48e-6e3c-45da-8698-64331d35b1fe ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.522-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e9afcedc-a466-42a1-8f5c-988f1c5fba36)'. Ident: 'index-1081--8000595249233899911', commit timestamp: 'Timestamp(1574796786, 4858)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.381-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e9afcedc-a466-42a1-8f5c-988f1c5fba36)'. Ident: 'index-1066-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 4858)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.513-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.522-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1075--8000595249233899911, commit timestamp: Timestamp(1574796786, 4858)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.381-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e9afcedc-a466-42a1-8f5c-988f1c5fba36)'. Ident: 'index-1067-8224331490264904478', commit timestamp: 'Timestamp(1574796786, 4858)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.514-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:06.524-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 074e0f32-92dc-42a9-972c-916892140e4f: test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 ( 02a743a3-42ee-45fe-b5eb-5ff4dad76330 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.381-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1064-8224331490264904478, commit timestamp: Timestamp(1574796786, 4858)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.516-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.381-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.046-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 with provided UUID: c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b and options: { uuid: UUID("c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.520-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796786, 4802) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796786, 4802), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 14464 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 159ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.381-0500 I INDEX [conn108] Registering index build: 50e18507-e3f5-4b74-a43a-b07107963ef9
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.060-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.520-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 98231aa3-f565-4961-bcf1-de495200855d: test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c ( 3fc9e48e-6e3c-45da-8698-64331d35b1fe ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.381-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 246831d7-77c9-4970-8bd3-9787f75587d8: test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 ( 02a743a3-42ee-45fe-b5eb-5ff4dad76330 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.075-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.533-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.381-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4480688470074394895, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8887303796746940897, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796786187), clusterTime: Timestamp(1574796786, 2525) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796786, 2525), signature: { hash: BinData(0, CFEA4FE89D137DA05B4194320336D8365880E1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 193ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.075-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.533-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.382-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.075-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 42bd45e0-9643-45cd-a0f7-14af5e883667: test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 (9d3ec432-6e42-4475-b7a8-6e458f944706 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.533-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: ca90bf06-6784-44d0-b993-e35d88abf427: test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 (02a743a3-42ee-45fe-b5eb-5ff4dad76330 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.384-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 with generated UUID: c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b and options: { temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.075-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.533-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.394-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.076-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.534-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.410-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.078-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.535-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe) to test5_fsmdb0.agg_out and drop e9afcedc-a466-42a1-8f5c-988f1c5fba36.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.410-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.080-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c (3fc9e48e-6e3c-45da-8698-64331d35b1fe) to test5_fsmdb0.agg_out and drop 35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.536-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.411-0500 I STORAGE [conn108] Index build initialized: 50e18507-e3f5-4b74-a43a-b07107963ef9: test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f (92598634-43f2-4a1c-bc97-da99ca495322 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.080-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 1), t: 1 } and commit timestamp Timestamp(1574796789, 1)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.536-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (e9afcedc-a466-42a1-8f5c-988f1c5fba36) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796786, 4858), t: 1 } and commit timestamp Timestamp(1574796786, 4858)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.411-0500 I INDEX [conn108] Waiting for index build to complete: 50e18507-e3f5-4b74-a43a-b07107963ef9
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.080-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.536-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (e9afcedc-a466-42a1-8f5c-988f1c5fba36).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.411-0500 I INDEX [conn114] Index build completed: 246831d7-77c9-4970-8bd3-9787f75587d8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.080-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 3fc9e48e-6e3c-45da-8698-64331d35b1fe from test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.536-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe from test5_fsmdb0.tmp.agg_out.7e42bb07-cbe7-4418-983f-6991e8c7082a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.411-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796786, 3911), signature: { hash: BinData(0, CFEA4FE89D137DA05B4194320336D8365880E1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 8444 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 115ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.080-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe)'. Ident: 'index-1078--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 1)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.536-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e9afcedc-a466-42a1-8f5c-988f1c5fba36)'. Ident: 'index-1076--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 4858)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.080-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe)'. Ident: 'index-1087--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.412-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 70038865-179a-40ab-9961-43e1e6f9c316: test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 ( 9d3ec432-6e42-4475-b7a8-6e458f944706 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.536-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e9afcedc-a466-42a1-8f5c-988f1c5fba36)'. Ident: 'index-1081--4104909142373009110', commit timestamp: 'Timestamp(1574796786, 4858)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.080-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1077--8000595249233899911, commit timestamp: Timestamp(1574796789, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:06.420-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.536-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1075--4104909142373009110, commit timestamp: Timestamp(1574796786, 4858)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.042-0500 I INDEX [conn110] Index build completed: 70038865-179a-40ab-9961-43e1e6f9c316
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:06.537-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: ca90bf06-6784-44d0-b993-e35d88abf427: test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 ( 02a743a3-42ee-45fe-b5eb-5ff4dad76330 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.042-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796786, 4350), signature: { hash: BinData(0, CFEA4FE89D137DA05B4194320336D8365880E1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 115 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2711ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.042-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 appName: "tid:1" command: create { create: "tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15", temp: true, validationLevel: "strict", validationAction: "warn", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796786, 4922), signature: { hash: BinData(0, CFEA4FE89D137DA05B4194320336D8365880E1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2658ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.042-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.062-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 with provided UUID: c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b and options: { uuid: UUID("c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b"), temp: true, validationLevel: "strict", validationAction: "warn" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.043-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 1), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.076-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.043-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.043-0500 I STORAGE [conn112] renameCollection: renaming collection 3fc9e48e-6e3c-45da-8698-64331d35b1fe from test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.043-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe)'. Ident: 'index-1070-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.043-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe)'. Ident: 'index-1071-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.043-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1068-8224331490264904478, commit timestamp: Timestamp(1574796789, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.043-0500 I INDEX [conn46] Registering index build: fae16be8-00a0-46ee-b9b3-e07d0e09d9d1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.043-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.043-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c appName: "tid:0" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "strict", validationAction: "warn" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796786, 5361), signature: { hash: BinData(0, CFEA4FE89D137DA05B4194320336D8365880E1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2634430 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2635ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.043-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3544213846055541268, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 903419176303175094, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796786252), clusterTime: Timestamp(1574796786, 3533) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796786, 3598), signature: { hash: BinData(0, CFEA4FE89D137DA05B4194320336D8365880E1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2790ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.043-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796786, 4802), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796786, 4802), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796786, 4802). Collection minimum timestamp is Timestamp(1574796786, 4856)" errName:SnapshotUnavailable errCode:246 reslen:602 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2521853 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2522ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.044-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.052-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.058-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.058-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.059-0500 I STORAGE [conn46] Index build initialized: fae16be8-00a0-46ee-b9b3-e07d0e09d9d1: test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 (c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.059-0500 I INDEX [conn46] Waiting for index build to complete: fae16be8-00a0-46ee-b9b3-e07d0e09d9d1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.059-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.059-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 50e18507-e3f5-4b74-a43a-b07107963ef9: test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f ( 92598634-43f2-4a1c-bc97-da99ca495322 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.059-0500 I INDEX [conn108] Index build completed: 50e18507-e3f5-4b74-a43a-b07107963ef9
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.059-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796786, 4854), signature: { hash: BinData(0, CFEA4FE89D137DA05B4194320336D8365880E1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 16198 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2694ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.060-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.061-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d with generated UUID: 3226e0b7-0cd2-4ad9-829a-463d8939f49e and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.064-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.074-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: fae16be8-00a0-46ee-b9b3-e07d0e09d9d1: test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 ( c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.074-0500 I INDEX [conn46] Index build completed: fae16be8-00a0-46ee-b9b3-e07d0e09d9d1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.083-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.084-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 42bd45e0-9643-45cd-a0f7-14af5e883667: test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 ( 9d3ec432-6e42-4475-b7a8-6e458f944706 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.084-0500 I COMMAND [conn110] CMD: drop test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.088-0500 I INDEX [conn112] Registering index build: 588d5aae-d49d-4c31-9644-7bdd466c5240
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.088-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 (02a743a3-42ee-45fe-b5eb-5ff4dad76330) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.088-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 (02a743a3-42ee-45fe-b5eb-5ff4dad76330).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.088-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 (02a743a3-42ee-45fe-b5eb-5ff4dad76330)'. Ident: 'index-1076-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 1137)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.088-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 (02a743a3-42ee-45fe-b5eb-5ff4dad76330)'. Ident: 'index-1081-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 1137)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.088-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5'. Ident: collection-1074-8224331490264904478, commit timestamp: Timestamp(1574796789, 1137)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.088-0500 I COMMAND [conn46] CMD: drop test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.088-0500 I COMMAND [conn71] command test5_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6776467318961594633, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8821694510595082558, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796786253), clusterTime: Timestamp(1574796786, 3598) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796786, 3598), signature: { hash: BinData(0, CFEA4FE89D137DA05B4194320336D8365880E1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:990 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2834ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:09.089-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796786, 3598), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:820 protocol:op_msg 2835ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.091-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.091-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.091-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: f366df49-fa61-47f2-b921-b07b79448436: test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 (9d3ec432-6e42-4475-b7a8-6e458f944706 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.091-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.091-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 with generated UUID: bd7130a2-8dba-4c6c-94c8-4afde90fa511 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.092-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.095-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.096-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c (3fc9e48e-6e3c-45da-8698-64331d35b1fe) to test5_fsmdb0.agg_out and drop 35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.097-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 1), t: 1 } and commit timestamp Timestamp(1574796789, 1)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.097-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.097-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 3fc9e48e-6e3c-45da-8698-64331d35b1fe from test5_fsmdb0.tmp.agg_out.fbede301-6bb6-44a4-b363-15a922582f1c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.097-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe)'. Ident: 'index-1078--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 1)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.097-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (35a25fb2-eb71-4c4e-ad97-31bb86b4e5fe)'. Ident: 'index-1087--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 1)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.097-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1077--4104909142373009110, commit timestamp: Timestamp(1574796789, 1)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.099-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: f366df49-fa61-47f2-b921-b07b79448436: test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 ( 9d3ec432-6e42-4475-b7a8-6e458f944706 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.100-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.113-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:09.114-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796786, 3975), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:820 protocol:op_msg 2817ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.116-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:09.181-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796786, 4858), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:820 protocol:op_msg 2797ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:11.571-0500 I CONNPOOL [TaskExecutorPool-0] Ending idle connection to host localhost:20004 because the pool meets constraints; 4 connections to that host remain open
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.113-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.100-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:09.149-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796786, 4350), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:820 protocol:op_msg 2816ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.116-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:09.259-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796789, 1137), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 169ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:09.354-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796789, 2022), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 171ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.100-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 6c4cc7ae-b35b-4ef6-b47a-8fa6c3418e49: test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f (92598634-43f2-4a1c-bc97-da99ca495322 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.113-0500 I STORAGE [conn112] Index build initialized: 588d5aae-d49d-4c31-9644-7bdd466c5240: test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d (3226e0b7-0cd2-4ad9-829a-463d8939f49e ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:09.221-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796789, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 161ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:11.572-0500 I NETWORK [conn88] end connection 127.0.0.1:46146 (46 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.116-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 4a86fc75-c6b8-4d46-bd39-a55200562ad7: test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f (92598634-43f2-4a1c-bc97-da99ca495322 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.100-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.113-0500 I INDEX [conn112] Waiting for index build to complete: 588d5aae-d49d-4c31-9644-7bdd466c5240
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:09.298-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796789, 2017), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 147ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.117-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.100-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.113-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 (9d3ec432-6e42-4475-b7a8-6e458f944706) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:09.298-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796789, 1576), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 183ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.117-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.103-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.114-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 (9d3ec432-6e42-4475-b7a8-6e458f944706).
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:09.399-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796789, 2527), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 176ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.119-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:12.142-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796789, 3162), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2881ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.106-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 6c4cc7ae-b35b-4ef6-b47a-8fa6c3418e49: test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f ( 92598634-43f2-4a1c-bc97-da99ca495322 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.114-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 (9d3ec432-6e42-4475-b7a8-6e458f944706)'. Ident: 'index-1080-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 1576)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.121-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 4a86fc75-c6b8-4d46-bd39-a55200562ad7: test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f ( 92598634-43f2-4a1c-bc97-da99ca495322 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.108-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d with provided UUID: 3226e0b7-0cd2-4ad9-829a-463d8939f49e and options: { uuid: UUID("3226e0b7-0cd2-4ad9-829a-463d8939f49e"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.114-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 (9d3ec432-6e42-4475-b7a8-6e458f944706)'. Ident: 'index-1085-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 1576)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.125-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d with provided UUID: 3226e0b7-0cd2-4ad9-829a-463d8939f49e and options: { uuid: UUID("3226e0b7-0cd2-4ad9-829a-463d8939f49e"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.124-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.114-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0'. Ident: collection-1078-8224331490264904478, commit timestamp: Timestamp(1574796789, 1576)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.140-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.146-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.114-0500 I COMMAND [conn67] command test5_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3979039199119702873, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5041364876499053147, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796786297), clusterTime: Timestamp(1574796786, 3975) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796786, 4039), signature: { hash: BinData(0, CFEA4FE89D137DA05B4194320336D8365880E1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:990 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2815ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.163-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.146-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.120-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.163-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.146-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: b2f559a2-c93d-41ad-9d76-27061796fbe3: test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 (c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.120-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.163-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 63b2cf3e-3f62-475f-9a7a-618b206e673d: test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 (c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.146-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.120-0500 I INDEX [conn110] Registering index build: 03bd185f-f667-43ee-adc8-6b5f20ab05c3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.163-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.147-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.120-0500 I COMMAND [conn108] CMD: drop test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.164-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.150-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.121-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 with generated UUID: f395395d-3793-453b-83f8-d93ef1a6cbed and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.166-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.152-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.121-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.170-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 63b2cf3e-3f62-475f-9a7a-618b206e673d: test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 ( c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.152-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 (02a743a3-42ee-45fe-b5eb-5ff4dad76330) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 1137), t: 1 } and commit timestamp Timestamp(1574796789, 1137)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.140-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.173-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.152-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 (02a743a3-42ee-45fe-b5eb-5ff4dad76330).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.148-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.173-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 (02a743a3-42ee-45fe-b5eb-5ff4dad76330) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 1137), t: 1 } and commit timestamp Timestamp(1574796789, 1137)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.152-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 (02a743a3-42ee-45fe-b5eb-5ff4dad76330)'. Ident: 'index-1086--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 1137)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.148-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.173-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 (02a743a3-42ee-45fe-b5eb-5ff4dad76330).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.152-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 (02a743a3-42ee-45fe-b5eb-5ff4dad76330)'. Ident: 'index-1095--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 1137)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.148-0500 I STORAGE [conn110] Index build initialized: 03bd185f-f667-43ee-adc8-6b5f20ab05c3: test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 (bd7130a2-8dba-4c6c-94c8-4afde90fa511 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.173-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 (02a743a3-42ee-45fe-b5eb-5ff4dad76330)'. Ident: 'index-1086--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 1137)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.152-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5'. Ident: collection-1085--8000595249233899911, commit timestamp: Timestamp(1574796789, 1137)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.148-0500 I INDEX [conn110] Waiting for index build to complete: 03bd185f-f667-43ee-adc8-6b5f20ab05c3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.173-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5 (02a743a3-42ee-45fe-b5eb-5ff4dad76330)'. Ident: 'index-1095--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 1137)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.152-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: b2f559a2-c93d-41ad-9d76-27061796fbe3: test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 ( c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.148-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f (92598634-43f2-4a1c-bc97-da99ca495322) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.173-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.1c80e877-a598-4f61-a0a5-59b52b8a2db5'. Ident: collection-1085--4104909142373009110, commit timestamp: Timestamp(1574796789, 1137)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.158-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 with provided UUID: bd7130a2-8dba-4c6c-94c8-4afde90fa511 and options: { uuid: UUID("bd7130a2-8dba-4c6c-94c8-4afde90fa511"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.148-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f (92598634-43f2-4a1c-bc97-da99ca495322).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.178-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 with provided UUID: bd7130a2-8dba-4c6c-94c8-4afde90fa511 and options: { uuid: UUID("bd7130a2-8dba-4c6c-94c8-4afde90fa511"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.176-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.148-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f (92598634-43f2-4a1c-bc97-da99ca495322)'. Ident: 'index-1084-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 2017)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.193-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.180-0500 I COMMAND [ReplWriterWorker-13] CMD: drop test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.148-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f (92598634-43f2-4a1c-bc97-da99ca495322)'. Ident: 'index-1087-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 2017)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.198-0500 I COMMAND [ReplWriterWorker-1] CMD: drop test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.180-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 (9d3ec432-6e42-4475-b7a8-6e458f944706) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 1576), t: 1 } and commit timestamp Timestamp(1574796789, 1576)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.148-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f'. Ident: collection-1082-8224331490264904478, commit timestamp: Timestamp(1574796789, 2017)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.198-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 (9d3ec432-6e42-4475-b7a8-6e458f944706) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 1576), t: 1 } and commit timestamp Timestamp(1574796789, 1576)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.180-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 (9d3ec432-6e42-4475-b7a8-6e458f944706).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.149-0500 I COMMAND [conn68] command test5_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5388640822029307643, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 831208009195135601, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796786333), clusterTime: Timestamp(1574796786, 4350) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796786, 4350), signature: { hash: BinData(0, CFEA4FE89D137DA05B4194320336D8365880E1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:990 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2815ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.198-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 (9d3ec432-6e42-4475-b7a8-6e458f944706).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.180-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 (9d3ec432-6e42-4475-b7a8-6e458f944706)'. Ident: 'index-1090--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 1576)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.155-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.198-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 (9d3ec432-6e42-4475-b7a8-6e458f944706)'. Ident: 'index-1090--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 1576)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.180-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 (9d3ec432-6e42-4475-b7a8-6e458f944706)'. Ident: 'index-1099--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 1576)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.156-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.198-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0 (9d3ec432-6e42-4475-b7a8-6e458f944706)'. Ident: 'index-1099--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 1576)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.180-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0'. Ident: collection-1089--8000595249233899911, commit timestamp: Timestamp(1574796789, 1576)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.156-0500 I INDEX [conn114] Registering index build: cfeb5bd7-0037-4087-80bd-7a313eaadd70
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.198-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.992a3c0a-476b-44e4-8a37-a095bd0247b0'. Ident: collection-1089--4104909142373009110, commit timestamp: Timestamp(1574796789, 1576)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.182-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 with provided UUID: f395395d-3793-453b-83f8-d93ef1a6cbed and options: { uuid: UUID("f395395d-3793-453b-83f8-d93ef1a6cbed"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.156-0500 I COMMAND [conn46] CMD: drop test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.201-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 with provided UUID: f395395d-3793-453b-83f8-d93ef1a6cbed and options: { uuid: UUID("f395395d-3793-453b-83f8-d93ef1a6cbed"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.196-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.156-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e with generated UUID: 025799c2-b472-4ff2-a8f6-a8bb51c3187f and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.213-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.216-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.157-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 588d5aae-d49d-4c31-9644-7bdd466c5240: test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d ( 3226e0b7-0cd2-4ad9-829a-463d8939f49e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.233-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.216-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.157-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.233-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.216-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 8722d520-2a14-4489-be27-aa02122bf516: test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d (3226e0b7-0cd2-4ad9-829a-463d8939f49e ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.172-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.233-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: d90679c7-b638-44b6-aadf-64123da04259: test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d (3226e0b7-0cd2-4ad9-829a-463d8939f49e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.217-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.179-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.234-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.217-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.179-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.234-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.219-0500 I COMMAND [ReplWriterWorker-8] CMD: drop test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.179-0500 I STORAGE [conn114] Index build initialized: cfeb5bd7-0037-4087-80bd-7a313eaadd70: test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 (f395395d-3793-453b-83f8-d93ef1a6cbed ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.235-0500 I COMMAND [ReplWriterWorker-5] CMD: drop test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.219-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f (92598634-43f2-4a1c-bc97-da99ca495322) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 2017), t: 1 } and commit timestamp Timestamp(1574796789, 2017)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.179-0500 I INDEX [conn114] Waiting for index build to complete: cfeb5bd7-0037-4087-80bd-7a313eaadd70
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.236-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f (92598634-43f2-4a1c-bc97-da99ca495322) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 2017), t: 1 } and commit timestamp Timestamp(1574796789, 2017)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.219-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f (92598634-43f2-4a1c-bc97-da99ca495322).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.180-0500 I INDEX [conn112] Index build completed: 588d5aae-d49d-4c31-9644-7bdd466c5240
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.236-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f (92598634-43f2-4a1c-bc97-da99ca495322).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.219-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f (92598634-43f2-4a1c-bc97-da99ca495322)'. Ident: 'index-1092--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 2017)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.180-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 (c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.236-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f (92598634-43f2-4a1c-bc97-da99ca495322)'. Ident: 'index-1092--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 2017)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.219-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f (92598634-43f2-4a1c-bc97-da99ca495322)'. Ident: 'index-1101--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 2017)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.180-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.236-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f (92598634-43f2-4a1c-bc97-da99ca495322)'. Ident: 'index-1101--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 2017)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.219-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f'. Ident: collection-1091--8000595249233899911, commit timestamp: Timestamp(1574796789, 2017)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.180-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 (c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.236-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.573c8906-13f5-489c-8430-1cbcbe095a8f'. Ident: collection-1091--4104909142373009110, commit timestamp: Timestamp(1574796789, 2017)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.219-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e with provided UUID: 025799c2-b472-4ff2-a8f6-a8bb51c3187f and options: { uuid: UUID("025799c2-b472-4ff2-a8f6-a8bb51c3187f"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.180-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 (c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b)'. Ident: 'index-1090-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 2022)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.237-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.220-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.180-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 (c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b)'. Ident: 'index-1091-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 2022)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.237-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e with provided UUID: 025799c2-b472-4ff2-a8f6-a8bb51c3187f and options: { uuid: UUID("025799c2-b472-4ff2-a8f6-a8bb51c3187f"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.228-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 8722d520-2a14-4489-be27-aa02122bf516: test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d ( 3226e0b7-0cd2-4ad9-829a-463d8939f49e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.180-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15'. Ident: collection-1088-8224331490264904478, commit timestamp: Timestamp(1574796789, 2022)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.240-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: d90679c7-b638-44b6-aadf-64123da04259: test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d ( 3226e0b7-0cd2-4ad9-829a-463d8939f49e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.236-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.180-0500 I COMMAND [conn70] command test5_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7394880000108572641, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4268099816535325349, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796786383), clusterTime: Timestamp(1574796786, 4858) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796786, 4922), signature: { hash: BinData(0, CFEA4FE89D137DA05B4194320336D8365880E1A3), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"warn\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"warn\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:990 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2796ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.254-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.251-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.183-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 with generated UUID: bce36599-70f6-43da-9a33-3a72c4bf19a7 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.270-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.251-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.187-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.270-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.251-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 1a23da9d-bd38-4d3b-b05d-9744705a32e5: test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 (bd7130a2-8dba-4c6c-94c8-4afde90fa511 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.188-0500 I INDEX [conn108] Registering index build: 8cb9f1cb-cc17-4211-ac69-79446e138a09
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.270-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 70375ebd-05c7-4ef1-832d-daa0618f87ec: test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 (bd7130a2-8dba-4c6c-94c8-4afde90fa511 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.251-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.189-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 03bd185f-f667-43ee-adc8-6b5f20ab05c3: test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 ( bd7130a2-8dba-4c6c-94c8-4afde90fa511 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.270-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.252-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.190-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.271-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.253-0500 I COMMAND [ReplWriterWorker-4] CMD: drop test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.207-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.272-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.253-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 (c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 2022), t: 1 } and commit timestamp Timestamp(1574796789, 2022)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.215-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.272-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 (c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 2022), t: 1 } and commit timestamp Timestamp(1574796789, 2022)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.253-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 (c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.220-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.272-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 (c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.253-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 (c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b)'. Ident: 'index-1098--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 2022)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.220-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.272-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 (c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b)'. Ident: 'index-1098--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 2022)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.253-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 (c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b)'. Ident: 'index-1105--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 2022)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.220-0500 I STORAGE [conn108] Index build initialized: 8cb9f1cb-cc17-4211-ac69-79446e138a09: test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e (025799c2-b472-4ff2-a8f6-a8bb51c3187f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.272-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15 (c0ec03d8-f02d-43d6-bc25-7eeb517f5e6b)'. Ident: 'index-1105--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 2022)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.253-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15'. Ident: collection-1097--8000595249233899911, commit timestamp: Timestamp(1574796789, 2022)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.220-0500 I INDEX [conn108] Waiting for index build to complete: 8cb9f1cb-cc17-4211-ac69-79446e138a09
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.272-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.1b977b76-b6b3-4a62-b668-21f13d982a15'. Ident: collection-1097--4104909142373009110, commit timestamp: Timestamp(1574796789, 2022)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.253-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.220-0500 I INDEX [conn110] Index build completed: 03bd185f-f667-43ee-adc8-6b5f20ab05c3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.273-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 with provided UUID: bce36599-70f6-43da-9a33-3a72c4bf19a7 and options: { uuid: UUID("bce36599-70f6-43da-9a33-3a72c4bf19a7"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.254-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 with provided UUID: bce36599-70f6-43da-9a33-3a72c4bf19a7 and options: { uuid: UUID("bce36599-70f6-43da-9a33-3a72c4bf19a7"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.220-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.273-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.257-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 1a23da9d-bd38-4d3b-b05d-9744705a32e5: test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 ( bd7130a2-8dba-4c6c-94c8-4afde90fa511 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.220-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (3fc9e48e-6e3c-45da-8698-64331d35b1fe) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 2527), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.281-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 70375ebd-05c7-4ef1-832d-daa0618f87ec: test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 ( bd7130a2-8dba-4c6c-94c8-4afde90fa511 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.271-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.220-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (3fc9e48e-6e3c-45da-8698-64331d35b1fe).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.289-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.294-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.220-0500 I STORAGE [conn46] renameCollection: renaming collection 3226e0b7-0cd2-4ad9-829a-463d8939f49e from test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.307-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.294-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.220-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3fc9e48e-6e3c-45da-8698-64331d35b1fe)'. Ident: 'index-1075-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 2527)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.307-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.295-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: a6de5bb4-a112-40db-b4b1-796912288820: test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 (f395395d-3793-453b-83f8-d93ef1a6cbed ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.220-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3fc9e48e-6e3c-45da-8698-64331d35b1fe)'. Ident: 'index-1077-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 2527)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.307-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: a0a039fb-507a-43e9-8b2c-4cf8569762fa: test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 (f395395d-3793-453b-83f8-d93ef1a6cbed ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.295-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.220-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1073-8224331490264904478, commit timestamp: Timestamp(1574796789, 2527)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.308-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.295-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.221-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.308-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.297-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d (3226e0b7-0cd2-4ad9-829a-463d8939f49e) to test5_fsmdb0.agg_out and drop 3fc9e48e-6e3c-45da-8698-64331d35b1fe.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.221-0500 I INDEX [conn112] Registering index build: e6958050-a7f0-4b51-ac3d-c9ff99b330e9
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.310-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d (3226e0b7-0cd2-4ad9-829a-463d8939f49e) to test5_fsmdb0.agg_out and drop 3fc9e48e-6e3c-45da-8698-64331d35b1fe.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.298-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.221-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4559246222000946714, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3850459063908741736, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796789060), clusterTime: Timestamp(1574796789, 5) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796789, 133), signature: { hash: BinData(0, 4BFA038737D34AE1D651316DBEC48FD4C22031B9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796785, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 160ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.310-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.298-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (3fc9e48e-6e3c-45da-8698-64331d35b1fe) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 2527), t: 1 } and commit timestamp Timestamp(1574796789, 2527)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.222-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: cfeb5bd7-0037-4087-80bd-7a313eaadd70: test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 ( f395395d-3793-453b-83f8-d93ef1a6cbed ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.310-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (3fc9e48e-6e3c-45da-8698-64331d35b1fe) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 2527), t: 1 } and commit timestamp Timestamp(1574796789, 2527)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.298-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (3fc9e48e-6e3c-45da-8698-64331d35b1fe).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.223-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.310-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (3fc9e48e-6e3c-45da-8698-64331d35b1fe).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.298-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 3226e0b7-0cd2-4ad9-829a-463d8939f49e from test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.224-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e with generated UUID: 05e67b35-ed16-408c-8b96-78990be763eb and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.310-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 3226e0b7-0cd2-4ad9-829a-463d8939f49e from test5_fsmdb0.tmp.agg_out.6390c598-956c-41ae-9b1b-1e3dd14bd24d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.298-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3fc9e48e-6e3c-45da-8698-64331d35b1fe)'. Ident: 'index-1084--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 2527)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.225-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.310-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3fc9e48e-6e3c-45da-8698-64331d35b1fe)'. Ident: 'index-1084--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 2527)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.298-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3fc9e48e-6e3c-45da-8698-64331d35b1fe)'. Ident: 'index-1093--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 2527)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.240-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 8cb9f1cb-cc17-4211-ac69-79446e138a09: test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e ( 025799c2-b472-4ff2-a8f6-a8bb51c3187f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.310-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3fc9e48e-6e3c-45da-8698-64331d35b1fe)'. Ident: 'index-1093--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 2527)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.298-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1083--8000595249233899911, commit timestamp: Timestamp(1574796789, 2527)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.248-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.310-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1083--4104909142373009110, commit timestamp: Timestamp(1574796789, 2527)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.299-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e with provided UUID: 05e67b35-ed16-408c-8b96-78990be763eb and options: { uuid: UUID("05e67b35-ed16-408c-8b96-78990be763eb"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.248-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.312-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: a0a039fb-507a-43e9-8b2c-4cf8569762fa: test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 ( f395395d-3793-453b-83f8-d93ef1a6cbed ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.301-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: a6de5bb4-a112-40db-b4b1-796912288820: test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 ( f395395d-3793-453b-83f8-d93ef1a6cbed ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.248-0500 I STORAGE [conn112] Index build initialized: e6958050-a7f0-4b51-ac3d-c9ff99b330e9: test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 (bce36599-70f6-43da-9a33-3a72c4bf19a7 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.318-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e with provided UUID: 05e67b35-ed16-408c-8b96-78990be763eb and options: { uuid: UUID("05e67b35-ed16-408c-8b96-78990be763eb"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.317-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.248-0500 I INDEX [conn112] Waiting for index build to complete: e6958050-a7f0-4b51-ac3d-c9ff99b330e9
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.333-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.332-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.248-0500 I INDEX [conn114] Index build completed: cfeb5bd7-0037-4087-80bd-7a313eaadd70
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.349-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.332-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.248-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.349-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.333-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: d8a38ad4-8dd2-40ca-a96e-d6bb3e08f6e0: test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e (025799c2-b472-4ff2-a8f6-a8bb51c3187f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.248-0500 I INDEX [conn108] Index build completed: 8cb9f1cb-cc17-4211-ac69-79446e138a09
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.349-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 6eeeca65-0381-48ab-b4cc-16be4b95dae5: test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e (025799c2-b472-4ff2-a8f6-a8bb51c3187f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.333-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.256-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.349-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.333-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.256-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.350-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.336-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.258-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.352-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.339-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d8a38ad4-8dd2-40ca-a96e-d6bb3e08f6e0: test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e ( 025799c2-b472-4ff2-a8f6-a8bb51c3187f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.258-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.356-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 6eeeca65-0381-48ab-b4cc-16be4b95dae5: test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e ( 025799c2-b472-4ff2-a8f6-a8bb51c3187f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.356-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.258-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (3226e0b7-0cd2-4ad9-829a-463d8939f49e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 3034), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.371-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.356-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.258-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (3226e0b7-0cd2-4ad9-829a-463d8939f49e).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.371-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.356-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 108a6781-61a3-414a-b390-da0f9894b1e7: test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 (bce36599-70f6-43da-9a33-3a72c4bf19a7 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.259-0500 I STORAGE [conn110] renameCollection: renaming collection bd7130a2-8dba-4c6c-94c8-4afde90fa511 from test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.371-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: a9921f62-115a-498d-b3f0-04ec3b321fea: test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 (bce36599-70f6-43da-9a33-3a72c4bf19a7 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.356-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.259-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3226e0b7-0cd2-4ad9-829a-463d8939f49e)'. Ident: 'index-1094-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 3034)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.371-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.357-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.259-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3226e0b7-0cd2-4ad9-829a-463d8939f49e)'. Ident: 'index-1095-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 3034)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.372-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.358-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 (bd7130a2-8dba-4c6c-94c8-4afde90fa511) to test5_fsmdb0.agg_out and drop 3226e0b7-0cd2-4ad9-829a-463d8939f49e.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.259-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1093-8224331490264904478, commit timestamp: Timestamp(1574796789, 3034)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.372-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 (bd7130a2-8dba-4c6c-94c8-4afde90fa511) to test5_fsmdb0.agg_out and drop 3226e0b7-0cd2-4ad9-829a-463d8939f49e.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.359-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.259-0500 I INDEX [conn46] Registering index build: 0c941332-1231-421e-991f-b11eb219f9d0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.375-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.359-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (3226e0b7-0cd2-4ad9-829a-463d8939f49e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 3034), t: 1 } and commit timestamp Timestamp(1574796789, 3034)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.259-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4068840847894760600, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2910116441342950777, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796789090), clusterTime: Timestamp(1574796789, 1137) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796789, 1201), signature: { hash: BinData(0, 4BFA038737D34AE1D651316DBEC48FD4C22031B9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 168ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.375-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (3226e0b7-0cd2-4ad9-829a-463d8939f49e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 3034), t: 1 } and commit timestamp Timestamp(1574796789, 3034)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.359-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (3226e0b7-0cd2-4ad9-829a-463d8939f49e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.261-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: e6958050-a7f0-4b51-ac3d-c9ff99b330e9: test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 ( bce36599-70f6-43da-9a33-3a72c4bf19a7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.375-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (3226e0b7-0cd2-4ad9-829a-463d8939f49e).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.360-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection bd7130a2-8dba-4c6c-94c8-4afde90fa511 from test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.262-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 with generated UUID: ca8449ec-a514-46da-ab40-1a3f0ed4aa0e and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.375-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection bd7130a2-8dba-4c6c-94c8-4afde90fa511 from test5_fsmdb0.tmp.agg_out.35d4c13c-0b46-4dde-80ad-5eee9d685977 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.360-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3226e0b7-0cd2-4ad9-829a-463d8939f49e)'. Ident: 'index-1104--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 3034)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.288-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.375-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3226e0b7-0cd2-4ad9-829a-463d8939f49e)'. Ident: 'index-1104--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 3034)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.360-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3226e0b7-0cd2-4ad9-829a-463d8939f49e)'. Ident: 'index-1111--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 3034)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.288-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.375-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3226e0b7-0cd2-4ad9-829a-463d8939f49e)'. Ident: 'index-1111--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 3034)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.360-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1103--8000595249233899911, commit timestamp: Timestamp(1574796789, 3034)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.288-0500 I STORAGE [conn46] Index build initialized: 0c941332-1231-421e-991f-b11eb219f9d0: test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e (05e67b35-ed16-408c-8b96-78990be763eb ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.375-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1103--4104909142373009110, commit timestamp: Timestamp(1574796789, 3034)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.362-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 108a6781-61a3-414a-b390-da0f9894b1e7: test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 ( bce36599-70f6-43da-9a33-3a72c4bf19a7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.288-0500 I INDEX [conn46] Waiting for index build to complete: 0c941332-1231-421e-991f-b11eb219f9d0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.376-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: a9921f62-115a-498d-b3f0-04ec3b321fea: test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 ( bce36599-70f6-43da-9a33-3a72c4bf19a7 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.364-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 with provided UUID: ca8449ec-a514-46da-ab40-1a3f0ed4aa0e and options: { uuid: UUID("ca8449ec-a514-46da-ab40-1a3f0ed4aa0e"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.288-0500 I INDEX [conn112] Index build completed: e6958050-a7f0-4b51-ac3d-c9ff99b330e9
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.391-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 with provided UUID: ca8449ec-a514-46da-ab40-1a3f0ed4aa0e and options: { uuid: UUID("ca8449ec-a514-46da-ab40-1a3f0ed4aa0e"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.381-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.297-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.436-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.388-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e (025799c2-b472-4ff2-a8f6-a8bb51c3187f) to test5_fsmdb0.agg_out and drop bd7130a2-8dba-4c6c-94c8-4afde90fa511.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.297-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.442-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e (025799c2-b472-4ff2-a8f6-a8bb51c3187f) to test5_fsmdb0.agg_out and drop bd7130a2-8dba-4c6c-94c8-4afde90fa511.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.388-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (bd7130a2-8dba-4c6c-94c8-4afde90fa511) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 4037), t: 1 } and commit timestamp Timestamp(1574796789, 4037)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.297-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (bd7130a2-8dba-4c6c-94c8-4afde90fa511) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 4037), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.442-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (bd7130a2-8dba-4c6c-94c8-4afde90fa511) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 4037), t: 1 } and commit timestamp Timestamp(1574796789, 4037)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.388-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (bd7130a2-8dba-4c6c-94c8-4afde90fa511).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.297-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (bd7130a2-8dba-4c6c-94c8-4afde90fa511).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.442-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (bd7130a2-8dba-4c6c-94c8-4afde90fa511).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.388-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection 025799c2-b472-4ff2-a8f6-a8bb51c3187f from test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.297-0500 I STORAGE [conn108] renameCollection: renaming collection 025799c2-b472-4ff2-a8f6-a8bb51c3187f from test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.442-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 025799c2-b472-4ff2-a8f6-a8bb51c3187f from test5_fsmdb0.tmp.agg_out.f1ff0170-2446-4792-8b99-1768c387408e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.388-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bd7130a2-8dba-4c6c-94c8-4afde90fa511)'. Ident: 'index-1108--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 4037)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.297-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bd7130a2-8dba-4c6c-94c8-4afde90fa511)'. Ident: 'index-1098-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 4037)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.442-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bd7130a2-8dba-4c6c-94c8-4afde90fa511)'. Ident: 'index-1108--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 4037)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.388-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bd7130a2-8dba-4c6c-94c8-4afde90fa511)'. Ident: 'index-1115--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 4037)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.297-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bd7130a2-8dba-4c6c-94c8-4afde90fa511)'. Ident: 'index-1099-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 4037)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.442-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bd7130a2-8dba-4c6c-94c8-4afde90fa511)'. Ident: 'index-1115--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 4037)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.388-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1107--8000595249233899911, commit timestamp: Timestamp(1574796789, 4037)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.389-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 (f395395d-3793-453b-83f8-d93ef1a6cbed) to test5_fsmdb0.agg_out and drop 025799c2-b472-4ff2-a8f6-a8bb51c3187f.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.442-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1107--4104909142373009110, commit timestamp: Timestamp(1574796789, 4037)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.389-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (025799c2-b472-4ff2-a8f6-a8bb51c3187f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 4038), t: 1 } and commit timestamp Timestamp(1574796789, 4038)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.297-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1096-8224331490264904478, commit timestamp: Timestamp(1574796789, 4037)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.443-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 (f395395d-3793-453b-83f8-d93ef1a6cbed) to test5_fsmdb0.agg_out and drop 025799c2-b472-4ff2-a8f6-a8bb51c3187f.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.389-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (025799c2-b472-4ff2-a8f6-a8bb51c3187f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.298-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.443-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (025799c2-b472-4ff2-a8f6-a8bb51c3187f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 4038), t: 1 } and commit timestamp Timestamp(1574796789, 4038)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.389-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection f395395d-3793-453b-83f8-d93ef1a6cbed from test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.298-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5873462510788602555, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3743233135316686601, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796789150), clusterTime: Timestamp(1574796789, 2017) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796789, 2017), signature: { hash: BinData(0, 4BFA038737D34AE1D651316DBEC48FD4C22031B9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 141ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.443-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (025799c2-b472-4ff2-a8f6-a8bb51c3187f).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.389-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (025799c2-b472-4ff2-a8f6-a8bb51c3187f)'. Ident: 'index-1114--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 4038)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.298-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (025799c2-b472-4ff2-a8f6-a8bb51c3187f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 4038), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.443-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection f395395d-3793-453b-83f8-d93ef1a6cbed from test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.389-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (025799c2-b472-4ff2-a8f6-a8bb51c3187f)'. Ident: 'index-1123--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 4038)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.298-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (025799c2-b472-4ff2-a8f6-a8bb51c3187f).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.443-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (025799c2-b472-4ff2-a8f6-a8bb51c3187f)'. Ident: 'index-1114--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 4038)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.389-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1113--8000595249233899911, commit timestamp: Timestamp(1574796789, 4038)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.298-0500 I STORAGE [conn114] renameCollection: renaming collection f395395d-3793-453b-83f8-d93ef1a6cbed from test5_fsmdb0.tmp.agg_out.a6f79ee6-1cef-415c-a383-16afbf1d1e80 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.443-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (025799c2-b472-4ff2-a8f6-a8bb51c3187f)'. Ident: 'index-1123--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 4038)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.390-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f with provided UUID: d5f9613a-4d97-4c07-9794-3819bcaa8514 and options: { uuid: UUID("d5f9613a-4d97-4c07-9794-3819bcaa8514"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.298-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (025799c2-b472-4ff2-a8f6-a8bb51c3187f)'. Ident: 'index-1106-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 4038)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.443-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1113--4104909142373009110, commit timestamp: Timestamp(1574796789, 4038)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.406-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.298-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (025799c2-b472-4ff2-a8f6-a8bb51c3187f)'. Ident: 'index-1108-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 4038)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.444-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f with provided UUID: d5f9613a-4d97-4c07-9794-3819bcaa8514 and options: { uuid: UUID("d5f9613a-4d97-4c07-9794-3819bcaa8514"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.449-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.298-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1104-8224331490264904478, commit timestamp: Timestamp(1574796789, 4038)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.458-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.449-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.298-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.472-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.449-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 8f917ab7-dc10-4cbc-b6d9-04187a5058a4: test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e (05e67b35-ed16-408c-8b96-78990be763eb ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.298-0500 I INDEX [conn110] Registering index build: 577d32b2-ff9e-422e-8d3f-d0cb8c4bf63f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.472-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.449-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.298-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7801234377612302521, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4023547193465757243, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796789115), clusterTime: Timestamp(1574796789, 1576) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796789, 1640), signature: { hash: BinData(0, 4BFA038737D34AE1D651316DBEC48FD4C22031B9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 177ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.472-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: e108b35a-b161-4641-bf0f-ef59b910c5d0: test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e (05e67b35-ed16-408c-8b96-78990be763eb ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.449-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.299-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.472-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.450-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 with provided UUID: 8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f and options: { uuid: UUID("8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.300-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f with generated UUID: d5f9613a-4d97-4c07-9794-3819bcaa8514 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.473-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.452-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.301-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.474-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 with provided UUID: 8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f and options: { uuid: UUID("8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.462-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 8f917ab7-dc10-4cbc-b6d9-04187a5058a4: test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e ( 05e67b35-ed16-408c-8b96-78990be763eb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.301-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 with generated UUID: 8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.475-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.469-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.317-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 0c941332-1231-421e-991f-b11eb219f9d0: test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e ( 05e67b35-ed16-408c-8b96-78990be763eb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.484-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: e108b35a-b161-4641-bf0f-ef59b910c5d0: test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e ( 05e67b35-ed16-408c-8b96-78990be763eb ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.488-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.332-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.492-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.488-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.333-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.492-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796789, 4041) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796789, 4106), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 161ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.488-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: a5852c79-35d6-4e34-8c94-1cb0a454a20b: test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.333-0500 I STORAGE [conn110] Index build initialized: 577d32b2-ff9e-422e-8d3f-d0cb8c4bf63f: test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.512-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.488-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.333-0500 I INDEX [conn110] Waiting for index build to complete: 577d32b2-ff9e-422e-8d3f-d0cb8c4bf63f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.512-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.489-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.333-0500 I INDEX [conn46] Index build completed: 0c941332-1231-421e-991f-b11eb219f9d0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.512-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 8b45e463-8356-4888-913f-66aefab94152: test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.490-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 (bce36599-70f6-43da-9a33-3a72c4bf19a7) to test5_fsmdb0.agg_out and drop f395395d-3793-453b-83f8-d93ef1a6cbed.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.333-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.512-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.490-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.341-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.512-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.491-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (f395395d-3793-453b-83f8-d93ef1a6cbed) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 4546), t: 1 } and commit timestamp Timestamp(1574796789, 4546)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.348-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.513-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 (bce36599-70f6-43da-9a33-3a72c4bf19a7) to test5_fsmdb0.agg_out and drop f395395d-3793-453b-83f8-d93ef1a6cbed.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.491-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (f395395d-3793-453b-83f8-d93ef1a6cbed).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.349-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.515-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.491-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection bce36599-70f6-43da-9a33-3a72c4bf19a7 from test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.352-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.515-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (f395395d-3793-453b-83f8-d93ef1a6cbed) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 4546), t: 1 } and commit timestamp Timestamp(1574796789, 4546)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.491-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f395395d-3793-453b-83f8-d93ef1a6cbed)'. Ident: 'index-1110--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 4546)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.353-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.515-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (f395395d-3793-453b-83f8-d93ef1a6cbed).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.491-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f395395d-3793-453b-83f8-d93ef1a6cbed)'. Ident: 'index-1119--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 4546)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.353-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (f395395d-3793-453b-83f8-d93ef1a6cbed) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 4546), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.515-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection bce36599-70f6-43da-9a33-3a72c4bf19a7 from test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.491-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1109--8000595249233899911, commit timestamp: Timestamp(1574796789, 4546)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.353-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (f395395d-3793-453b-83f8-d93ef1a6cbed).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.515-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f395395d-3793-453b-83f8-d93ef1a6cbed)'. Ident: 'index-1110--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 4546)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.492-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 with provided UUID: 9fe00dcc-df8c-4077-a771-d04a1680f2ec and options: { uuid: UUID("9fe00dcc-df8c-4077-a771-d04a1680f2ec"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.353-0500 I STORAGE [conn112] renameCollection: renaming collection bce36599-70f6-43da-9a33-3a72c4bf19a7 from test5_fsmdb0.tmp.agg_out.944f7b89-9fc4-4f74-97d7-d45fd08829b7 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.515-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f395395d-3793-453b-83f8-d93ef1a6cbed)'. Ident: 'index-1119--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 4546)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.493-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: a5852c79-35d6-4e34-8c94-1cb0a454a20b: test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 ( ca8449ec-a514-46da-ab40-1a3f0ed4aa0e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.353-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f395395d-3793-453b-83f8-d93ef1a6cbed)'. Ident: 'index-1102-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 4546)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.515-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1109--4104909142373009110, commit timestamp: Timestamp(1574796789, 4546)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.506-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.353-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f395395d-3793-453b-83f8-d93ef1a6cbed)'. Ident: 'index-1103-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 4546)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.516-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 with provided UUID: 9fe00dcc-df8c-4077-a771-d04a1680f2ec and options: { uuid: UUID("9fe00dcc-df8c-4077-a771-d04a1680f2ec"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.513-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e (05e67b35-ed16-408c-8b96-78990be763eb) to test5_fsmdb0.agg_out and drop bce36599-70f6-43da-9a33-3a72c4bf19a7.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.353-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1100-8224331490264904478, commit timestamp: Timestamp(1574796789, 4546)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.516-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 8b45e463-8356-4888-913f-66aefab94152: test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 ( ca8449ec-a514-46da-ab40-1a3f0ed4aa0e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.513-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (bce36599-70f6-43da-9a33-3a72c4bf19a7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 5050), t: 1 } and commit timestamp Timestamp(1574796789, 5050)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.353-0500 I INDEX [conn108] Registering index build: 35304202-d6c8-4c62-8b88-93e946505854
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.527-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.513-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (bce36599-70f6-43da-9a33-3a72c4bf19a7).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.353-0500 I INDEX [conn114] Registering index build: 1d2d5e6d-f199-44da-ae50-3532880c4836
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.535-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e (05e67b35-ed16-408c-8b96-78990be763eb) to test5_fsmdb0.agg_out and drop bce36599-70f6-43da-9a33-3a72c4bf19a7.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.513-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 05e67b35-ed16-408c-8b96-78990be763eb from test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.353-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5409934723759659278, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5910440794961330126, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796789182), clusterTime: Timestamp(1574796789, 2022) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796789, 2022), signature: { hash: BinData(0, 4BFA038737D34AE1D651316DBEC48FD4C22031B9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 170ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.535-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (bce36599-70f6-43da-9a33-3a72c4bf19a7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 5050), t: 1 } and commit timestamp Timestamp(1574796789, 5050)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.513-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bce36599-70f6-43da-9a33-3a72c4bf19a7)'. Ident: 'index-1118--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 5050)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.356-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 577d32b2-ff9e-422e-8d3f-d0cb8c4bf63f: test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 ( ca8449ec-a514-46da-ab40-1a3f0ed4aa0e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.535-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (bce36599-70f6-43da-9a33-3a72c4bf19a7).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.513-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bce36599-70f6-43da-9a33-3a72c4bf19a7)'. Ident: 'index-1125--8000595249233899911', commit timestamp: 'Timestamp(1574796789, 5050)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.356-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 with generated UUID: 9fe00dcc-df8c-4077-a771-d04a1680f2ec and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.535-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 05e67b35-ed16-408c-8b96-78990be763eb from test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:09.513-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1117--8000595249233899911, commit timestamp: Timestamp(1574796789, 5050)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.378-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.535-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bce36599-70f6-43da-9a33-3a72c4bf19a7)'. Ident: 'index-1118--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 5050)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.378-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.145-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 with provided UUID: d011c864-b521-48ec-b932-9d4ea8f87fef and options: { uuid: UUID("d011c864-b521-48ec-b932-9d4ea8f87fef"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.535-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bce36599-70f6-43da-9a33-3a72c4bf19a7)'. Ident: 'index-1125--4104909142373009110', commit timestamp: 'Timestamp(1574796789, 5050)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.378-0500 I STORAGE [conn108] Index build initialized: 35304202-d6c8-4c62-8b88-93e946505854: test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.158-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:09.535-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1117--4104909142373009110, commit timestamp: Timestamp(1574796789, 5050)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.378-0500 I INDEX [conn108] Waiting for index build to complete: 35304202-d6c8-4c62-8b88-93e946505854
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.160-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 with provided UUID: d011c864-b521-48ec-b932-9d4ea8f87fef and options: { uuid: UUID("d011c864-b521-48ec-b932-9d4ea8f87fef"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.378-0500 I INDEX [conn110] Index build completed: 577d32b2-ff9e-422e-8d3f-d0cb8c4bf63f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.385-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.398-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.398-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.398-0500 I STORAGE [conn114] Index build initialized: 1d2d5e6d-f199-44da-ae50-3532880c4836: test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f (d5f9613a-4d97-4c07-9794-3819bcaa8514 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.398-0500 I INDEX [conn114] Waiting for index build to complete: 1d2d5e6d-f199-44da-ae50-3532880c4836
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.398-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.398-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (bce36599-70f6-43da-9a33-3a72c4bf19a7) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796789, 5050), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.398-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (bce36599-70f6-43da-9a33-3a72c4bf19a7).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.398-0500 I STORAGE [conn46] renameCollection: renaming collection 05e67b35-ed16-408c-8b96-78990be763eb from test5_fsmdb0.tmp.agg_out.4489761f-a368-4f47-8972-ddd2d59df26e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.399-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bce36599-70f6-43da-9a33-3a72c4bf19a7)'. Ident: 'index-1109-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 5050)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.399-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bce36599-70f6-43da-9a33-3a72c4bf19a7)'. Ident: 'index-1111-8224331490264904478', commit timestamp: 'Timestamp(1574796789, 5050)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.399-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1107-8224331490264904478, commit timestamp: Timestamp(1574796789, 5050)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.399-0500 I INDEX [conn112] Registering index build: 774ec9eb-8c18-4f49-bb2b-0f59bbdbda28
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.399-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.399-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.399-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 127467837603124434, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3081856387881088722, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796789223), clusterTime: Timestamp(1574796789, 2527) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796789, 2527), signature: { hash: BinData(0, 4BFA038737D34AE1D651316DBEC48FD4C22031B9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 175ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.399-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.402-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 with generated UUID: d011c864-b521-48ec-b932-9d4ea8f87fef and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.402-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.439-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.450-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 35304202-d6c8-4c62-8b88-93e946505854: test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 ( 8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.457-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.457-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.458-0500 I STORAGE [conn112] Index build initialized: 774ec9eb-8c18-4f49-bb2b-0f59bbdbda28: test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 (9fe00dcc-df8c-4077-a771-d04a1680f2ec ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.458-0500 I INDEX [conn112] Waiting for index build to complete: 774ec9eb-8c18-4f49-bb2b-0f59bbdbda28
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.458-0500 I INDEX [conn108] Index build completed: 35304202-d6c8-4c62-8b88-93e946505854
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.458-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796789, 4543), signature: { hash: BinData(0, 4BFA038737D34AE1D651316DBEC48FD4C22031B9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 4036 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 108ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.459-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:09.466-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.141-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 appName: "tid:0" command: create { create: "tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395", temp: true, validationLevel: "strict", validationAction: "error", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796789, 5114), signature: { hash: BinData(0, 4BFA038737D34AE1D651316DBEC48FD4C22031B9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2738ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.141-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.141-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (05e67b35-ed16-408c-8b96-78990be763eb) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 2), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.141-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (05e67b35-ed16-408c-8b96-78990be763eb).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.141-0500 I STORAGE [conn110] renameCollection: renaming collection ca8449ec-a514-46da-ab40-1a3f0ed4aa0e from test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.141-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (05e67b35-ed16-408c-8b96-78990be763eb)'. Ident: 'index-1114-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.141-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (05e67b35-ed16-408c-8b96-78990be763eb)'. Ident: 'index-1115-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 2)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.141-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1112-8224331490264904478, commit timestamp: Timestamp(1574796792, 2)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.141-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 appName: "tid:3" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "strict", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796789, 5553), signature: { hash: BinData(0, 4BFA038737D34AE1D651316DBEC48FD4C22031B9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2690933 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2691ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.141-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.141-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796789, 4041), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796789, 4106), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796789, 4041). Collection minimum timestamp is Timestamp(1574796789, 5117)" errName:SnapshotUnavailable errCode:246 reslen:602 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2647261 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2647ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.141-0500 I INDEX [conn46] Registering index build: 12e11311-f602-4ee4-b9dc-200deb4c5623
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.142-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3708274702819253014, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1322896240584443563, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796789260), clusterTime: Timestamp(1574796789, 3162) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796789, 3162), signature: { hash: BinData(0, 4BFA038737D34AE1D651316DBEC48FD4C22031B9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2880ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.143-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 1d2d5e6d-f199-44da-ae50-3532880c4836: test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f ( d5f9613a-4d97-4c07-9794-3819bcaa8514 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.143-0500 I COMMAND [conn71] CMD: dropIndexes test5_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.143-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.146-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.154-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 774ec9eb-8c18-4f49-bb2b-0f59bbdbda28: test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 ( 9fe00dcc-df8c-4077-a771-d04a1680f2ec ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.161-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.161-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.161-0500 I STORAGE [conn46] Index build initialized: 12e11311-f602-4ee4-b9dc-200deb4c5623: test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 (d011c864-b521-48ec-b932-9d4ea8f87fef ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.161-0500 I INDEX [conn46] Waiting for index build to complete: 12e11311-f602-4ee4-b9dc-200deb4c5623
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.161-0500 I INDEX [conn114] Index build completed: 1d2d5e6d-f199-44da-ae50-3532880c4836
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.161-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.161-0500 I INDEX [conn112] Index build completed: 774ec9eb-8c18-4f49-bb2b-0f59bbdbda28
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.161-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796789, 4543), signature: { hash: BinData(0, 4BFA038737D34AE1D651316DBEC48FD4C22031B9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 11178 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2819ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.161-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796789, 5048), signature: { hash: BinData(0, 4BFA038737D34AE1D651316DBEC48FD4C22031B9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 13565 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2775ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.161-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.163-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.164-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea with generated UUID: edf68a44-2fee-4aae-971c-7af2eb1a4c6f and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.166-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 12e11311-f602-4ee4-b9dc-200deb4c5623: test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 ( d011c864-b521-48ec-b932-9d4ea8f87fef ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.166-0500 I INDEX [conn46] Index build completed: 12e11311-f602-4ee4-b9dc-200deb4c5623
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.176-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.176-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.176-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: fc256e87-ce78-4094-b66d-99bdaeb863a0: test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.176-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.176-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.176-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.178-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.181-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.181-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.182-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 637), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.182-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.182-0500 I STORAGE [conn108] renameCollection: renaming collection 8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f from test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.182-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: fc256e87-ce78-4094-b66d-99bdaeb863a0: test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 ( 8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.182-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e)'. Ident: 'index-1118-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 637)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.182-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e)'. Ident: 'index-1119-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 637)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.182-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1116-8224331490264904478, commit timestamp: Timestamp(1574796792, 637)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.182-0500 I INDEX [conn114] Registering index build: a156d744-7ec3-436f-b8db-bb7033b97f31
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.182-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8411679151130409859, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4776869606168938092, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796789300), clusterTime: Timestamp(1574796789, 4038) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796789, 4038), signature: { hash: BinData(0, 4BFA038737D34AE1D651316DBEC48FD4C22031B9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2881ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:12.182-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796789, 4038), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2882ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.186-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c with generated UUID: 750ee2fe-dc6f-47f1-94e3-2dcfd4af545f and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.192-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.192-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.192-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: e89c3b41-3745-4f42-90d7-9acffe56bba3: test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.192-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.193-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.196-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.196-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.196-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.196-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 6f1458c4-0c4b-46ae-ad10-ffdb36f204bf: test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f (d5f9613a-4d97-4c07-9794-3819bcaa8514 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.196-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.197-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.198-0500 I COMMAND [ReplWriterWorker-0] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e) to test5_fsmdb0.agg_out and drop 05e67b35-ed16-408c-8b96-78990be763eb.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.200-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.200-0500 I STORAGE [ReplWriterWorker-0] dropCollection: test5_fsmdb0.agg_out (05e67b35-ed16-408c-8b96-78990be763eb) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 2), t: 1 } and commit timestamp Timestamp(1574796792, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.200-0500 I STORAGE [ReplWriterWorker-0] Finishing collection drop for test5_fsmdb0.agg_out (05e67b35-ed16-408c-8b96-78990be763eb).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.200-0500 I STORAGE [ReplWriterWorker-0] renameCollection: renaming collection ca8449ec-a514-46da-ab40-1a3f0ed4aa0e from test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.200-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (05e67b35-ed16-408c-8b96-78990be763eb)'. Ident: 'index-1122--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.200-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (05e67b35-ed16-408c-8b96-78990be763eb)'. Ident: 'index-1131--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 2)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.200-0500 I STORAGE [ReplWriterWorker-0] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1121--8000595249233899911, commit timestamp: Timestamp(1574796792, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.201-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: e89c3b41-3745-4f42-90d7-9acffe56bba3: test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 ( 8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.202-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 6f1458c4-0c4b-46ae-ad10-ffdb36f204bf: test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f ( d5f9613a-4d97-4c07-9794-3819bcaa8514 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.210-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.210-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.210-0500 I STORAGE [conn114] Index build initialized: a156d744-7ec3-436f-b8db-bb7033b97f31: test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea (edf68a44-2fee-4aae-971c-7af2eb1a4c6f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.210-0500 I INDEX [conn114] Waiting for index build to complete: a156d744-7ec3-436f-b8db-bb7033b97f31
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.217-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.217-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.217-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: a81c4103-cdd8-4f5d-abf6-073318d1c8f5: test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f (d5f9613a-4d97-4c07-9794-3819bcaa8514 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.217-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.217-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.217-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 1832), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I STORAGE [conn112] renameCollection: renaming collection d5f9613a-4d97-4c07-9794-3819bcaa8514 from test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f)'. Ident: 'index-1124-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 1832)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f)'. Ident: 'index-1125-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 1832)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1121-8224331490264904478, commit timestamp: Timestamp(1574796792, 1832)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (d5f9613a-4d97-4c07-9794-3819bcaa8514) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 1833), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (d5f9613a-4d97-4c07-9794-3819bcaa8514).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I STORAGE [conn110] renameCollection: renaming collection 9fe00dcc-df8c-4077-a771-d04a1680f2ec from test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3642196010957081654, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2546147806207628403, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796789299), clusterTime: Timestamp(1574796789, 4038) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796789, 4038), signature: { hash: BinData(0, 4BFA038737D34AE1D651316DBEC48FD4C22031B9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2918ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d5f9613a-4d97-4c07-9794-3819bcaa8514)'. Ident: 'index-1123-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 1833)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d5f9613a-4d97-4c07-9794-3819bcaa8514)'. Ident: 'index-1129-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 1833)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.218-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e) to test5_fsmdb0.agg_out and drop 05e67b35-ed16-408c-8b96-78990be763eb.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.218-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:12.219-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796789, 4038), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2919ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1120-8224331490264904478, commit timestamp: Timestamp(1574796792, 1833)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.219-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:12.219-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796789, 4546), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2864ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.219-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 4954b114-ca55-4a55-9386-c1f71ff48cd8: test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 (9fe00dcc-df8c-4077-a771-d04a1680f2ec ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.218-0500 I INDEX [conn108] Registering index build: aad3386c-9c90-4b57-9a95-789bee71e960
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.219-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.219-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8284906467656628042, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 9183984890483755833, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796789355), clusterTime: Timestamp(1574796789, 4546) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796789, 4546), signature: { hash: BinData(0, 4BFA038737D34AE1D651316DBEC48FD4C22031B9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2863ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.219-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.219-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.220-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.221-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (05e67b35-ed16-408c-8b96-78990be763eb) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 2), t: 1 } and commit timestamp Timestamp(1574796792, 2)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.221-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (05e67b35-ed16-408c-8b96-78990be763eb).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.221-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection ca8449ec-a514-46da-ab40-1a3f0ed4aa0e from test5_fsmdb0.tmp.agg_out.4b62834d-4a8d-4b3c-9ff9-85465baa4a23 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.221-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (05e67b35-ed16-408c-8b96-78990be763eb)'. Ident: 'index-1122--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.221-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (05e67b35-ed16-408c-8b96-78990be763eb)'. Ident: 'index-1131--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 2)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.221-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1121--4104909142373009110, commit timestamp: Timestamp(1574796792, 2)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.222-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.222-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: a81c4103-cdd8-4f5d-abf6-073318d1c8f5: test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f ( d5f9613a-4d97-4c07-9794-3819bcaa8514 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.226-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4954b114-ca55-4a55-9386-c1f71ff48cd8: test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 ( 9fe00dcc-df8c-4077-a771-d04a1680f2ec ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.230-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.237-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.237-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.237-0500 I STORAGE [conn108] Index build initialized: aad3386c-9c90-4b57-9a95-789bee71e960: test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.237-0500 I INDEX [conn108] Waiting for index build to complete: aad3386c-9c90-4b57-9a95-789bee71e960
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.237-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.237-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.237-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 4efacda6-c0b1-400a-ad95-c29d3f7bd59a: test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 (9fe00dcc-df8c-4077-a771-d04a1680f2ec ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.237-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.237-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.238-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.238-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: a156d744-7ec3-436f-b8db-bb7033b97f31: test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea ( edf68a44-2fee-4aae-971c-7af2eb1a4c6f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.238-0500 I INDEX [conn114] Index build completed: a156d744-7ec3-436f-b8db-bb7033b97f31
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.239-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc with generated UUID: 574a7edf-b1c1-4cd2-abce-cd3715bacfbf and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.239-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.241-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.241-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.241-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.241-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: e19f1418-9d25-47bf-a917-d80dba74d1f3: test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 (d011c864-b521-48ec-b932-9d4ea8f87fef ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.241-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.242-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.244-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea with provided UUID: edf68a44-2fee-4aae-971c-7af2eb1a4c6f and options: { uuid: UUID("edf68a44-2fee-4aae-971c-7af2eb1a4c6f"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.245-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4efacda6-c0b1-400a-ad95-c29d3f7bd59a: test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 ( 9fe00dcc-df8c-4077-a771-d04a1680f2ec ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.245-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.248-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.253-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: e19f1418-9d25-47bf-a917-d80dba74d1f3: test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 ( d011c864-b521-48ec-b932-9d4ea8f87fef ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.256-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:12.257-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796789, 5114), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2856ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.259-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.262-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:12.306-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796792, 261), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 143ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:12.377-0500 I CONNPOOL [TaskExecutorPool-0] Ending idle connection to host localhost:20004 because the pool meets constraints; 3 connections to that host remain open
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:12.432-0500 I NETWORK [conn22] end connection 127.0.0.1:55594 (24 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.256-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:12.336-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796792, 829), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 152ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.262-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.267-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f) to test5_fsmdb0.agg_out and drop ca8449ec-a514-46da-ab40-1a3f0ed4aa0e.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:12.357-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796792, 1897), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 136ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.256-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (9fe00dcc-df8c-4077-a771-d04a1680f2ec) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 2021), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:12.578-0500 I NETWORK [conn19] end connection 127.0.0.1:55580 (23 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:12.378-0500 I NETWORK [conn77] end connection 127.0.0.1:46134 (45 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:12.447-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796792, 1965), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 208ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.262-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 43ec2e72-917f-4287-b70e-a5073bb3be72: test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 (d011c864-b521-48ec-b932-9d4ea8f87fef ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.267-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 637), t: 1 } and commit timestamp Timestamp(1574796792, 637)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:12.432-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 1 connections to that host remain open
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.256-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (9fe00dcc-df8c-4077-a771-d04a1680f2ec).
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:12.384-0500 I CONNPOOL [TaskExecutorPool-0] Ending idle connection to host localhost:20001 because the pool meets constraints; 2 connections to that host remain open
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:12.448-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796792, 2021), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 189ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.262-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.267-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e).
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:12.502-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796792, 2591), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:819 protocol:op_msg 195ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.256-0500 I STORAGE [conn46] renameCollection: renaming collection d011c864-b521-48ec-b932-9d4ea8f87fef from test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:12.386-0500 I CONNPOOL [TaskExecutorPool-0] Ending idle connection to host localhost:20001 because the pool meets constraints; 1 connections to that host remain open
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:12.521-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796792, 3533), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:819 protocol:op_msg 164ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.263-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.267-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f from test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:12.522-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796792, 3534), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:819 protocol:op_msg 164ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.256-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9fe00dcc-df8c-4077-a771-d04a1680f2ec)'. Ident: 'index-1128-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 2021)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:12.423-0500 I CONNPOOL [TaskExecutorPool-0] Ending idle connection to host localhost:20004 because the pool meets constraints; 2 connections to that host remain open
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:12.578-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 1 connections to that host remain open
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.264-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.267-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e)'. Ident: 'index-1128--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 637)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.257-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9fe00dcc-df8c-4077-a771-d04a1680f2ec)'. Ident: 'index-1131-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 2021)'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:12.425-0500 I NETWORK [conn85] end connection 127.0.0.1:46144 (44 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.266-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea with provided UUID: edf68a44-2fee-4aae-971c-7af2eb1a4c6f and options: { uuid: UUID("edf68a44-2fee-4aae-971c-7af2eb1a4c6f"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:15.013-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796792, 4546), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2565ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.267-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e)'. Ident: 'index-1135--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 637)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.257-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1126-8224331490264904478, commit timestamp: Timestamp(1574796792, 2021)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:12.426-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20004 because the pool meets constraints; 1 connections to that host remain open
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.267-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 43ec2e72-917f-4287-b70e-a5073bb3be72: test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 ( d011c864-b521-48ec-b932-9d4ea8f87fef ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.267-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1127--8000595249233899911, commit timestamp: Timestamp(1574796792, 637)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.257-0500 I INDEX [conn114] Registering index build: 3085c287-14c5-4e66-81d0-79d995bf57d8
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:12.426-0500 I NETWORK [conn71] end connection 127.0.0.1:46124 (43 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.281-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.275-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c with provided UUID: 750ee2fe-dc6f-47f1-94e3-2dcfd4af545f and options: { uuid: UUID("750ee2fe-dc6f-47f1-94e3-2dcfd4af545f"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.257-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3374338581245298085, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5747807207551691846, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796789400), clusterTime: Timestamp(1574796789, 5114) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796789, 5114), signature: { hash: BinData(0, 4BFA038737D34AE1D651316DBEC48FD4C22031B9), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2855ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:12.548-0500 I CONNPOOL [TaskExecutorPool-0] Ending idle connection to host localhost:20004 because the pool meets constraints; 1 connections to that host remain open
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.287-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f) to test5_fsmdb0.agg_out and drop ca8449ec-a514-46da-ab40-1a3f0ed4aa0e.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.290-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.257-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: aad3386c-9c90-4b57-9a95-789bee71e960: test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c ( 750ee2fe-dc6f-47f1-94e3-2dcfd4af545f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:12.549-0500 I NETWORK [conn82] end connection 127.0.0.1:46138 (42 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.287-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 637), t: 1 } and commit timestamp Timestamp(1574796792, 637)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.287-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.257-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f with generated UUID: cf573385-9ac7-483e-8e97-676bbcdce685 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.287-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f from test5_fsmdb0.tmp.agg_out.30c1465d-426a-4d4c-800a-cb0d598c8706 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.260-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b with generated UUID: b64fbbdd-e86a-4c0f-b217-33c2381b1e50 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.287-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e)'. Ident: 'index-1128--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 637)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.288-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.287-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ca8449ec-a514-46da-ab40-1a3f0ed4aa0e)'. Ident: 'index-1135--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 637)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.288-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.302-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f (d5f9613a-4d97-4c07-9794-3819bcaa8514) to test5_fsmdb0.agg_out and drop 8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.287-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1127--4104909142373009110, commit timestamp: Timestamp(1574796792, 637)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.288-0500 I STORAGE [conn114] Index build initialized: 3085c287-14c5-4e66-81d0-79d995bf57d8: test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc (574a7edf-b1c1-4cd2-abce-cd3715bacfbf ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.302-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 1832), t: 1 } and commit timestamp Timestamp(1574796792, 1832)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.293-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c with provided UUID: 750ee2fe-dc6f-47f1-94e3-2dcfd4af545f and options: { uuid: UUID("750ee2fe-dc6f-47f1-94e3-2dcfd4af545f"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.288-0500 I INDEX [conn114] Waiting for index build to complete: 3085c287-14c5-4e66-81d0-79d995bf57d8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.302-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.307-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.288-0500 I INDEX [conn108] Index build completed: aad3386c-9c90-4b57-9a95-789bee71e960
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.302-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection d5f9613a-4d97-4c07-9794-3819bcaa8514 from test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.315-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f (d5f9613a-4d97-4c07-9794-3819bcaa8514) to test5_fsmdb0.agg_out and drop 8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.288-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.302-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f)'. Ident: 'index-1134--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 1832)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.315-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 1832), t: 1 } and commit timestamp Timestamp(1574796792, 1832)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.296-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.302-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f)'. Ident: 'index-1141--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 1832)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.315-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.302-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.302-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1133--8000595249233899911, commit timestamp: Timestamp(1574796792, 1832)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.315-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection d5f9613a-4d97-4c07-9794-3819bcaa8514 from test5_fsmdb0.tmp.agg_out.144c947f-13a6-4c98-b8cf-10b1ed77f61f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.302-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.303-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 (9fe00dcc-df8c-4077-a771-d04a1680f2ec) to test5_fsmdb0.agg_out and drop d5f9613a-4d97-4c07-9794-3819bcaa8514.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.315-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f)'. Ident: 'index-1134--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 1832)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.305-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.303-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (d5f9613a-4d97-4c07-9794-3819bcaa8514) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 1833), t: 1 } and commit timestamp Timestamp(1574796792, 1833)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.315-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (8d4edf22-fe8c-40b3-8ed8-d56fe8fb861f)'. Ident: 'index-1141--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 1832)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.305-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.303-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (d5f9613a-4d97-4c07-9794-3819bcaa8514).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.315-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1133--4104909142373009110, commit timestamp: Timestamp(1574796792, 1832)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.305-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (d011c864-b521-48ec-b932-9d4ea8f87fef) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 2527), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.303-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 9fe00dcc-df8c-4077-a771-d04a1680f2ec from test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.316-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 (9fe00dcc-df8c-4077-a771-d04a1680f2ec) to test5_fsmdb0.agg_out and drop d5f9613a-4d97-4c07-9794-3819bcaa8514.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.305-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (d011c864-b521-48ec-b932-9d4ea8f87fef).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.303-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d5f9613a-4d97-4c07-9794-3819bcaa8514)'. Ident: 'index-1130--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 1833)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.316-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (d5f9613a-4d97-4c07-9794-3819bcaa8514) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 1833), t: 1 } and commit timestamp Timestamp(1574796792, 1833)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.305-0500 I STORAGE [conn110] renameCollection: renaming collection edf68a44-2fee-4aae-971c-7af2eb1a4c6f from test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.303-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d5f9613a-4d97-4c07-9794-3819bcaa8514)'. Ident: 'index-1143--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 1833)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.316-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (d5f9613a-4d97-4c07-9794-3819bcaa8514).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.305-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d011c864-b521-48ec-b932-9d4ea8f87fef)'. Ident: 'index-1134-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 2527)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.303-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1129--8000595249233899911, commit timestamp: Timestamp(1574796792, 1833)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.316-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 9fe00dcc-df8c-4077-a771-d04a1680f2ec from test5_fsmdb0.tmp.agg_out.968d706b-c6aa-4f94-8f8b-17220060de55 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.305-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d011c864-b521-48ec-b932-9d4ea8f87fef)'. Ident: 'index-1135-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 2527)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.322-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.316-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d5f9613a-4d97-4c07-9794-3819bcaa8514)'. Ident: 'index-1130--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 1833)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.305-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1132-8224331490264904478, commit timestamp: Timestamp(1574796792, 2527)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.322-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.316-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d5f9613a-4d97-4c07-9794-3819bcaa8514)'. Ident: 'index-1143--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 1833)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.305-0500 I INDEX [conn46] Registering index build: 46a8d6f4-835b-4799-9f23-c23fe1e57eb4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.322-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 1b0cc7f3-d830-452f-8b63-43fb0d9020b6: test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea (edf68a44-2fee-4aae-971c-7af2eb1a4c6f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.316-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1129--4104909142373009110, commit timestamp: Timestamp(1574796792, 1833)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.305-0500 I INDEX [conn112] Registering index build: 367c1f8e-91a4-42d5-81f1-724777547676
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.323-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.338-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.305-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5960806345953937052, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7217962295094790856, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796792163), clusterTime: Timestamp(1574796792, 261) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 326), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 141ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.324-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.338-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.307-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 3085c287-14c5-4e66-81d0-79d995bf57d8: test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc ( 574a7edf-b1c1-4cd2-abce-cd3715bacfbf ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.326-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.338-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 916b60b7-8766-496f-b151-9a9f3ae83809: test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea (edf68a44-2fee-4aae-971c-7af2eb1a4c6f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.309-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae with generated UUID: 00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.329-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc with provided UUID: 574a7edf-b1c1-4cd2-abce-cd3715bacfbf and options: { uuid: UUID("574a7edf-b1c1-4cd2-abce-cd3715bacfbf"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.338-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.328-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.331-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 1b0cc7f3-d830-452f-8b63-43fb0d9020b6: test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea ( edf68a44-2fee-4aae-971c-7af2eb1a4c6f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.339-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.328-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.347-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.342-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.328-0500 I STORAGE [conn46] Index build initialized: 46a8d6f4-835b-4799-9f23-c23fe1e57eb4: test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f (cf573385-9ac7-483e-8e97-676bbcdce685 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.363-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.344-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796792, 1965) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796792, 2017), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 3747 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 103ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.328-0500 I INDEX [conn46] Waiting for index build to complete: 46a8d6f4-835b-4799-9f23-c23fe1e57eb4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.363-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.344-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 916b60b7-8766-496f-b151-9a9f3ae83809: test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea ( edf68a44-2fee-4aae-971c-7af2eb1a4c6f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.328-0500 I INDEX [conn114] Index build completed: 3085c287-14c5-4e66-81d0-79d995bf57d8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.363-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 909dea5c-a980-4c94-af36-6335d625b51e: test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.349-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc with provided UUID: 574a7edf-b1c1-4cd2-abce-cd3715bacfbf and options: { uuid: UUID("574a7edf-b1c1-4cd2-abce-cd3715bacfbf"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.335-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.363-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.364-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.335-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.364-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.381-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.335-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (edf68a44-2fee-4aae-971c-7af2eb1a4c6f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 3030), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.364-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 (d011c864-b521-48ec-b932-9d4ea8f87fef) to test5_fsmdb0.agg_out and drop 9fe00dcc-df8c-4077-a771-d04a1680f2ec.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.381-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.335-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (edf68a44-2fee-4aae-971c-7af2eb1a4c6f).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.365-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.381-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 03065de4-5259-439e-84c3-569ca25fa783: test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.335-0500 I STORAGE [conn108] renameCollection: renaming collection 750ee2fe-dc6f-47f1-94e3-2dcfd4af545f from test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.366-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (9fe00dcc-df8c-4077-a771-d04a1680f2ec) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 2021), t: 1 } and commit timestamp Timestamp(1574796792, 2021)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.381-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.335-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (edf68a44-2fee-4aae-971c-7af2eb1a4c6f)'. Ident: 'index-1138-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 3030)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.366-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (9fe00dcc-df8c-4077-a771-d04a1680f2ec).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.382-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.335-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (edf68a44-2fee-4aae-971c-7af2eb1a4c6f)'. Ident: 'index-1139-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 3030)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.366-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection d011c864-b521-48ec-b932-9d4ea8f87fef from test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.383-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 (d011c864-b521-48ec-b932-9d4ea8f87fef) to test5_fsmdb0.agg_out and drop 9fe00dcc-df8c-4077-a771-d04a1680f2ec.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.335-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1137-8224331490264904478, commit timestamp: Timestamp(1574796792, 3030)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.366-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9fe00dcc-df8c-4077-a771-d04a1680f2ec)'. Ident: 'index-1138--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 2021)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.384-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.336-0500 I INDEX [conn110] Registering index build: eafcfe3d-0f0d-45ba-bbf5-4682897696a8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.366-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9fe00dcc-df8c-4077-a771-d04a1680f2ec)'. Ident: 'index-1145--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 2021)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.385-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (9fe00dcc-df8c-4077-a771-d04a1680f2ec) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 2021), t: 1 } and commit timestamp Timestamp(1574796792, 2021)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.336-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.366-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1137--8000595249233899911, commit timestamp: Timestamp(1574796792, 2021)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.385-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (9fe00dcc-df8c-4077-a771-d04a1680f2ec).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.336-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2476816527508928268, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7269168352056567964, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796792184), clusterTime: Timestamp(1574796792, 829) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 829), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 151ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.366-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f with provided UUID: cf573385-9ac7-483e-8e97-676bbcdce685 and options: { uuid: UUID("cf573385-9ac7-483e-8e97-676bbcdce685"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.385-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection d011c864-b521-48ec-b932-9d4ea8f87fef from test5_fsmdb0.tmp.agg_out.4edf2deb-c8ff-4eef-8cc2-9242a87f6395 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.336-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.367-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 909dea5c-a980-4c94-af36-6335d625b51e: test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c ( 750ee2fe-dc6f-47f1-94e3-2dcfd4af545f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.385-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (9fe00dcc-df8c-4077-a771-d04a1680f2ec)'. Ident: 'index-1138--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 2021)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.337-0500 I COMMAND [conn68] CMD: dropIndexes test5_fsmdb0.agg_out: { randInt: -1.0 }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.382-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.385-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (9fe00dcc-df8c-4077-a771-d04a1680f2ec)'. Ident: 'index-1145--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 2021)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.346-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.386-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b with provided UUID: b64fbbdd-e86a-4c0f-b217-33c2381b1e50 and options: { uuid: UUID("b64fbbdd-e86a-4c0f-b217-33c2381b1e50"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.385-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1137--4104909142373009110, commit timestamp: Timestamp(1574796792, 2021)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.356-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.400-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.386-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f with provided UUID: cf573385-9ac7-483e-8e97-676bbcdce685 and options: { uuid: UUID("cf573385-9ac7-483e-8e97-676bbcdce685"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.356-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.422-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.386-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 03065de4-5259-439e-84c3-569ca25fa783: test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c ( 750ee2fe-dc6f-47f1-94e3-2dcfd4af545f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.356-0500 I STORAGE [conn112] Index build initialized: 367c1f8e-91a4-42d5-81f1-724777547676: test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b (b64fbbdd-e86a-4c0f-b217-33c2381b1e50 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.422-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.402-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.356-0500 I INDEX [conn112] Waiting for index build to complete: 367c1f8e-91a4-42d5-81f1-724777547676
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.422-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 36dbafef-1d8a-4c30-8457-8e7a9b8c11c3: test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc (574a7edf-b1c1-4cd2-abce-cd3715bacfbf ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.405-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b with provided UUID: b64fbbdd-e86a-4c0f-b217-33c2381b1e50 and options: { uuid: UUID("b64fbbdd-e86a-4c0f-b217-33c2381b1e50"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.356-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.422-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.421-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.356-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 3534), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.423-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.443-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.356-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.424-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea (edf68a44-2fee-4aae-971c-7af2eb1a4c6f) to test5_fsmdb0.agg_out and drop d011c864-b521-48ec-b932-9d4ea8f87fef.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.443-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.356-0500 I STORAGE [conn114] renameCollection: renaming collection 574a7edf-b1c1-4cd2-abce-cd3715bacfbf from test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.425-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.443-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 8f85fe32-4b5d-45e9-b0b1-2d5c4527a61c: test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc (574a7edf-b1c1-4cd2-abce-cd3715bacfbf ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.356-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f)'. Ident: 'index-1142-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 3534)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.425-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (d011c864-b521-48ec-b932-9d4ea8f87fef) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 2527), t: 1 } and commit timestamp Timestamp(1574796792, 2527)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.443-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.356-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f)'. Ident: 'index-1143-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 3534)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.425-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (d011c864-b521-48ec-b932-9d4ea8f87fef).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.443-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.356-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1140-8224331490264904478, commit timestamp: Timestamp(1574796792, 3534)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.425-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection edf68a44-2fee-4aae-971c-7af2eb1a4c6f from test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.444-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea (edf68a44-2fee-4aae-971c-7af2eb1a4c6f) to test5_fsmdb0.agg_out and drop d011c864-b521-48ec-b932-9d4ea8f87fef.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.356-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.425-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d011c864-b521-48ec-b932-9d4ea8f87fef)'. Ident: 'index-1140--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 2527)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.445-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.357-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 185199074726211715, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1392517849716315859, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796792220), clusterTime: Timestamp(1574796792, 1897) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 2017), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 118ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.425-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d011c864-b521-48ec-b932-9d4ea8f87fef)'. Ident: 'index-1147--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 2527)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.445-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (d011c864-b521-48ec-b932-9d4ea8f87fef) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 2527), t: 1 } and commit timestamp Timestamp(1574796792, 2527)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.359-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 with generated UUID: f776f310-42e7-4f73-a375-1ae951f6e95e and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.425-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1139--8000595249233899911, commit timestamp: Timestamp(1574796792, 2527)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.445-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (d011c864-b521-48ec-b932-9d4ea8f87fef).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.359-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 with generated UUID: 567327c8-89a8-4cba-99b8-03d43c43b731 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.427-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 36dbafef-1d8a-4c30-8457-8e7a9b8c11c3: test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc ( 574a7edf-b1c1-4cd2-abce-cd3715bacfbf ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.445-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection edf68a44-2fee-4aae-971c-7af2eb1a4c6f from test5_fsmdb0.tmp.agg_out.6f32a718-7d07-429d-b030-0659b32937ea to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.361-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 46a8d6f4-835b-4799-9f23-c23fe1e57eb4: test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f ( cf573385-9ac7-483e-8e97-676bbcdce685 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.429-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae with provided UUID: 00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f and options: { uuid: UUID("00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.445-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d011c864-b521-48ec-b932-9d4ea8f87fef)'. Ident: 'index-1140--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 2527)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.361-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.444-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.445-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d011c864-b521-48ec-b932-9d4ea8f87fef)'. Ident: 'index-1147--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 2527)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.364-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.448-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f) to test5_fsmdb0.agg_out and drop edf68a44-2fee-4aae-971c-7af2eb1a4c6f.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.445-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1139--4104909142373009110, commit timestamp: Timestamp(1574796792, 2527)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.384-0500 I NETWORK [conn48] end connection 127.0.0.1:38474 (49 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.448-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (edf68a44-2fee-4aae-971c-7af2eb1a4c6f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 3030), t: 1 } and commit timestamp Timestamp(1574796792, 3030)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.449-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 8f85fe32-4b5d-45e9-b0b1-2d5c4527a61c: test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc ( 574a7edf-b1c1-4cd2-abce-cd3715bacfbf ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.386-0500 I NETWORK [conn73] end connection 127.0.0.1:38684 (48 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.448-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (edf68a44-2fee-4aae-971c-7af2eb1a4c6f).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.451-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae with provided UUID: 00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f and options: { uuid: UUID("00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.387-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 367c1f8e-91a4-42d5-81f1-724777547676: test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b ( b64fbbdd-e86a-4c0f-b217-33c2381b1e50 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.449-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 750ee2fe-dc6f-47f1-94e3-2dcfd4af545f from test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.466-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.394-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.449-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (edf68a44-2fee-4aae-971c-7af2eb1a4c6f)'. Ident: 'index-1150--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 3030)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.470-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f) to test5_fsmdb0.agg_out and drop edf68a44-2fee-4aae-971c-7af2eb1a4c6f.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.395-0500 I INDEX [conn114] Registering index build: c1600158-28a0-4621-83b0-70b744c8d04f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.449-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (edf68a44-2fee-4aae-971c-7af2eb1a4c6f)'. Ident: 'index-1153--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 3030)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.470-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (edf68a44-2fee-4aae-971c-7af2eb1a4c6f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 3030), t: 1 } and commit timestamp Timestamp(1574796792, 3030)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.402-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.449-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1149--8000595249233899911, commit timestamp: Timestamp(1574796792, 3030)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.470-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (edf68a44-2fee-4aae-971c-7af2eb1a4c6f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.402-0500 I INDEX [conn108] Registering index build: a6c58db0-091d-44df-9099-29e89b1a173d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.470-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.470-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 750ee2fe-dc6f-47f1-94e3-2dcfd4af545f from test5_fsmdb0.tmp.agg_out.ea9453ca-2b25-4c25-bb6c-e73be59d265c to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.406-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.470-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.470-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (edf68a44-2fee-4aae-971c-7af2eb1a4c6f)'. Ident: 'index-1150--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 3030)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.406-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.470-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: b05f2043-510c-4f50-9203-f84b018eb20e: test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f (cf573385-9ac7-483e-8e97-676bbcdce685 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.470-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (edf68a44-2fee-4aae-971c-7af2eb1a4c6f)'. Ident: 'index-1153--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 3030)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.406-0500 I STORAGE [conn110] Index build initialized: eafcfe3d-0f0d-45ba-bbf5-4682897696a8: test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae (00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.470-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.470-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1149--4104909142373009110, commit timestamp: Timestamp(1574796792, 3030)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.406-0500 I INDEX [conn110] Waiting for index build to complete: eafcfe3d-0f0d-45ba-bbf5-4682897696a8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.471-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.472-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796792, 3030) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796792, 3094), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 123ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.407-0500 I INDEX [conn46] Index build completed: 46a8d6f4-835b-4799-9f23-c23fe1e57eb4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.473-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.488-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.407-0500 I INDEX [conn112] Index build completed: 367c1f8e-91a4-42d5-81f1-724777547676
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.475-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc (574a7edf-b1c1-4cd2-abce-cd3715bacfbf) to test5_fsmdb0.agg_out and drop 750ee2fe-dc6f-47f1-94e3-2dcfd4af545f.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.488-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.407-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.476-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 3534), t: 1 } and commit timestamp Timestamp(1574796792, 3534)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.488-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 015b5225-a19b-4326-8605-a4180be3799a: test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f (cf573385-9ac7-483e-8e97-676bbcdce685 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.407-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 2524), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 9087 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 110ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.476-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.488-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.407-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 2524), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 10071 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 103ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.476-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 574a7edf-b1c1-4cd2-abce-cd3715bacfbf from test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.488-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.407-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.476-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f)'. Ident: 'index-1152--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 3534)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.491-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.409-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.476-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f)'. Ident: 'index-1157--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 3534)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.492-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc (574a7edf-b1c1-4cd2-abce-cd3715bacfbf) to test5_fsmdb0.agg_out and drop 750ee2fe-dc6f-47f1-94e3-2dcfd4af545f.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.417-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: eafcfe3d-0f0d-45ba-bbf5-4682897696a8: test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae ( 00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.476-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1151--8000595249233899911, commit timestamp: Timestamp(1574796792, 3534)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.492-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 3534), t: 1 } and commit timestamp Timestamp(1574796792, 3534)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.426-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.476-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 with provided UUID: f776f310-42e7-4f73-a375-1ae951f6e95e and options: { uuid: UUID("f776f310-42e7-4f73-a375-1ae951f6e95e"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.492-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.426-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.477-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: b05f2043-510c-4f50-9203-f84b018eb20e: test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f ( cf573385-9ac7-483e-8e97-676bbcdce685 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.492-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 574a7edf-b1c1-4cd2-abce-cd3715bacfbf from test5_fsmdb0.tmp.agg_out.ac297a36-9be5-4265-b282-0863ac8d3adc to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.427-0500 I STORAGE [conn114] Index build initialized: c1600158-28a0-4621-83b0-70b744c8d04f: test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 (f776f310-42e7-4f73-a375-1ae951f6e95e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.494-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.492-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f)'. Ident: 'index-1152--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 3534)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.427-0500 I INDEX [conn114] Waiting for index build to complete: c1600158-28a0-4621-83b0-70b744c8d04f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.495-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 with provided UUID: 567327c8-89a8-4cba-99b8-03d43c43b731 and options: { uuid: UUID("567327c8-89a8-4cba-99b8-03d43c43b731"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.492-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (750ee2fe-dc6f-47f1-94e3-2dcfd4af545f)'. Ident: 'index-1157--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 3534)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.427-0500 I INDEX [conn110] Index build completed: eafcfe3d-0f0d-45ba-bbf5-4682897696a8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.510-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.492-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1151--4104909142373009110, commit timestamp: Timestamp(1574796792, 3534)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.427-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.525-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.494-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 015b5225-a19b-4326-8605-a4180be3799a: test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f ( cf573385-9ac7-483e-8e97-676bbcdce685 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.427-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.525-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.496-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 with provided UUID: f776f310-42e7-4f73-a375-1ae951f6e95e and options: { uuid: UUID("f776f310-42e7-4f73-a375-1ae951f6e95e"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.438-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.525-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: db521e5b-b87c-40fa-86bb-320f33836345: test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b (b64fbbdd-e86a-4c0f-b217-33c2381b1e50 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.510-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.446-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.525-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.511-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 with provided UUID: 567327c8-89a8-4cba-99b8-03d43c43b731 and options: { uuid: UUID("567327c8-89a8-4cba-99b8-03d43c43b731"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.446-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.526-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.525-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.446-0500 I STORAGE [conn108] Index build initialized: a6c58db0-091d-44df-9099-29e89b1a173d: test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 (567327c8-89a8-4cba-99b8-03d43c43b731 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.528-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.543-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.446-0500 I INDEX [conn108] Waiting for index build to complete: a6c58db0-091d-44df-9099-29e89b1a173d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.538-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: db521e5b-b87c-40fa-86bb-320f33836345: test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b ( b64fbbdd-e86a-4c0f-b217-33c2381b1e50 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.543-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.446-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.546-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.543-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 48a6637f-d718-4724-a7f8-0e1194746181: test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b (b64fbbdd-e86a-4c0f-b217-33c2381b1e50 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.446-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (574a7edf-b1c1-4cd2-abce-cd3715bacfbf) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 4546), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.546-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.543-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.446-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (574a7edf-b1c1-4cd2-abce-cd3715bacfbf).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.546-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 058f4227-aa5c-4bf9-af75-c3bcf5e42043: test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae (00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.544-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.446-0500 I STORAGE [conn112] renameCollection: renaming collection cf573385-9ac7-483e-8e97-676bbcdce685 from test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.546-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.546-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.446-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (574a7edf-b1c1-4cd2-abce-cd3715bacfbf)'. Ident: 'index-1146-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 4546)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.547-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.550-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 48a6637f-d718-4724-a7f8-0e1194746181: test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b ( b64fbbdd-e86a-4c0f-b217-33c2381b1e50 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.446-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (574a7edf-b1c1-4cd2-abce-cd3715bacfbf)'. Ident: 'index-1147-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 4546)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.549-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.565-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.446-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1145-8224331490264904478, commit timestamp: Timestamp(1574796792, 4546)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.553-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 058f4227-aa5c-4bf9-af75-c3bcf5e42043: test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae ( 00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.565-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.447-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 378456744330285091, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6055600187388808868, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796792238), clusterTime: Timestamp(1574796792, 1965) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 2018), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 17363 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 207ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.578-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.565-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: ca94ad77-9b5b-4054-bde2-0a7c3ab96059: test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae (00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.447-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c1600158-28a0-4621-83b0-70b744c8d04f: test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 ( f776f310-42e7-4f73-a375-1ae951f6e95e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.578-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.565-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.447-0500 I INDEX [conn114] Index build completed: c1600158-28a0-4621-83b0-70b744c8d04f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.578-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 391e0aeb-c92f-430e-a3fe-e63e953c7f23: test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 (f776f310-42e7-4f73-a375-1ae951f6e95e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.565-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.447-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.578-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.568-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.447-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (cf573385-9ac7-483e-8e97-676bbcdce685) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 4547), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.579-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.569-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: ca94ad77-9b5b-4054-bde2-0a7c3ab96059: test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae ( 00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.447-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (cf573385-9ac7-483e-8e97-676bbcdce685).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.581-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f (cf573385-9ac7-483e-8e97-676bbcdce685) to test5_fsmdb0.agg_out and drop 574a7edf-b1c1-4cd2-abce-cd3715bacfbf.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.594-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.447-0500 I STORAGE [conn46] renameCollection: renaming collection b64fbbdd-e86a-4c0f-b217-33c2381b1e50 from test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.581-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.594-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.447-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (cf573385-9ac7-483e-8e97-676bbcdce685)'. Ident: 'index-1151-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 4547)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.581-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (574a7edf-b1c1-4cd2-abce-cd3715bacfbf) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 4546), t: 1 } and commit timestamp Timestamp(1574796792, 4546)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.594-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 876f8527-3f41-4d87-b35d-85b366faca3c: test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 (f776f310-42e7-4f73-a375-1ae951f6e95e ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.447-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (cf573385-9ac7-483e-8e97-676bbcdce685)'. Ident: 'index-1153-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 4547)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.581-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (574a7edf-b1c1-4cd2-abce-cd3715bacfbf).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.594-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.447-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1148-8224331490264904478, commit timestamp: Timestamp(1574796792, 4547)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.581-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection cf573385-9ac7-483e-8e97-676bbcdce685 from test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.595-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.447-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.581-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (574a7edf-b1c1-4cd2-abce-cd3715bacfbf)'. Ident: 'index-1156--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 4546)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.596-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f (cf573385-9ac7-483e-8e97-676bbcdce685) to test5_fsmdb0.agg_out and drop 574a7edf-b1c1-4cd2-abce-cd3715bacfbf.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.447-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2391548972137908307, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2726635722023224815, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796792258), clusterTime: Timestamp(1574796792, 2021) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 2022), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 188ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.581-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (574a7edf-b1c1-4cd2-abce-cd3715bacfbf)'. Ident: 'index-1163--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 4546)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.597-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.448-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.581-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1155--8000595249233899911, commit timestamp: Timestamp(1574796792, 4546)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.598-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (574a7edf-b1c1-4cd2-abce-cd3715bacfbf) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 4546), t: 1 } and commit timestamp Timestamp(1574796792, 4546)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.450-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.582-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b (b64fbbdd-e86a-4c0f-b217-33c2381b1e50) to test5_fsmdb0.agg_out and drop cf573385-9ac7-483e-8e97-676bbcdce685.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.598-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (574a7edf-b1c1-4cd2-abce-cd3715bacfbf).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.452-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: a6c58db0-091d-44df-9099-29e89b1a173d: test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 ( 567327c8-89a8-4cba-99b8-03d43c43b731 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.582-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (cf573385-9ac7-483e-8e97-676bbcdce685) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 4547), t: 1 } and commit timestamp Timestamp(1574796792, 4547)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.598-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection cf573385-9ac7-483e-8e97-676bbcdce685 from test5_fsmdb0.tmp.agg_out.5abe01d2-292d-4b38-8abe-c49d5cd8da4f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.452-0500 I INDEX [conn108] Index build completed: a6c58db0-091d-44df-9099-29e89b1a173d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.582-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (cf573385-9ac7-483e-8e97-676bbcdce685).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.598-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (574a7edf-b1c1-4cd2-abce-cd3715bacfbf)'. Ident: 'index-1156--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 4546)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.452-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a with generated UUID: abe54ffe-5ab1-483d-86b6-d050f4c61002 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.582-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection b64fbbdd-e86a-4c0f-b217-33c2381b1e50 from test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.598-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (574a7edf-b1c1-4cd2-abce-cd3715bacfbf)'. Ident: 'index-1163--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 4546)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.456-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 with generated UUID: a7913c01-2e98-410b-ad2b-5701fe9b5046 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.582-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (cf573385-9ac7-483e-8e97-676bbcdce685)'. Ident: 'index-1160--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 4547)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.598-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1155--4104909142373009110, commit timestamp: Timestamp(1574796792, 4546)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.478-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.582-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (cf573385-9ac7-483e-8e97-676bbcdce685)'. Ident: 'index-1167--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 4547)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.598-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b (b64fbbdd-e86a-4c0f-b217-33c2381b1e50) to test5_fsmdb0.agg_out and drop cf573385-9ac7-483e-8e97-676bbcdce685.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.478-0500 I INDEX [conn46] Registering index build: f6a059b0-fc4a-403d-a1be-09ff8052d4d0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.582-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1159--8000595249233899911, commit timestamp: Timestamp(1574796792, 4547)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.599-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (cf573385-9ac7-483e-8e97-676bbcdce685) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 4547), t: 1 } and commit timestamp Timestamp(1574796792, 4547)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.485-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.583-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 391e0aeb-c92f-430e-a3fe-e63e953c7f23: test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 ( f776f310-42e7-4f73-a375-1ae951f6e95e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.599-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (cf573385-9ac7-483e-8e97-676bbcdce685).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.500-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.599-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.599-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection b64fbbdd-e86a-4c0f-b217-33c2381b1e50 from test5_fsmdb0.tmp.agg_out.e48e1fc8-61d1-4f9a-a8fa-37e6f5342f8b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.500-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.599-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.599-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (cf573385-9ac7-483e-8e97-676bbcdce685)'. Ident: 'index-1160--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 4547)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.500-0500 I STORAGE [conn46] Index build initialized: f6a059b0-fc4a-403d-a1be-09ff8052d4d0: test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a (abe54ffe-5ab1-483d-86b6-d050f4c61002 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.599-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 8631f553-28ef-4649-ad5c-106e7e8a1361: test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 (567327c8-89a8-4cba-99b8-03d43c43b731 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.599-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (cf573385-9ac7-483e-8e97-676bbcdce685)'. Ident: 'index-1167--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 4547)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.500-0500 I INDEX [conn46] Waiting for index build to complete: f6a059b0-fc4a-403d-a1be-09ff8052d4d0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.599-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.599-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1159--4104909142373009110, commit timestamp: Timestamp(1574796792, 4547)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.501-0500 I COMMAND [conn110] CMD: drop test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.600-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.599-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 876f8527-3f41-4d87-b35d-85b366faca3c: test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 ( f776f310-42e7-4f73-a375-1ae951f6e95e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.501-0500 I INDEX [conn112] Registering index build: 75e2854f-04b2-44b5-a631-25ae23af89bd
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.602-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.615-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.501-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.603-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a with provided UUID: abe54ffe-5ab1-483d-86b6-d050f4c61002 and options: { uuid: UUID("abe54ffe-5ab1-483d-86b6-d050f4c61002"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.615-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.501-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae (00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.605-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 8631f553-28ef-4649-ad5c-106e7e8a1361: test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 ( 567327c8-89a8-4cba-99b8-03d43c43b731 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.615-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: bb43c13c-0409-4355-844a-f1fdd174ca92: test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 (567327c8-89a8-4cba-99b8-03d43c43b731 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.501-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae (00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.619-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.616-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.501-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae (00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f)'. Ident: 'index-1156-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 6054)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.622-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 with provided UUID: a7913c01-2e98-410b-ad2b-5701fe9b5046 and options: { uuid: UUID("a7913c01-2e98-410b-ad2b-5701fe9b5046"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.616-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.502-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae (00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f)'. Ident: 'index-1159-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 6054)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.636-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.618-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.502-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae'. Ident: collection-1154-8224331490264904478, commit timestamp: Timestamp(1574796792, 6054)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.646-0500 I COMMAND [ReplWriterWorker-14] CMD: drop test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.620-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a with provided UUID: abe54ffe-5ab1-483d-86b6-d050f4c61002 and options: { uuid: UUID("abe54ffe-5ab1-483d-86b6-d050f4c61002"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.502-0500 I COMMAND [conn114] CMD: drop test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.646-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae (00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 6054), t: 1 } and commit timestamp Timestamp(1574796792, 6054)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.620-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: bb43c13c-0409-4355-844a-f1fdd174ca92: test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 ( 567327c8-89a8-4cba-99b8-03d43c43b731 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.502-0500 I COMMAND [conn71] command test5_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3619797852569396595, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3532181809359355785, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796792307), clusterTime: Timestamp(1574796792, 2591) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 2591), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 194ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.646-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae (00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.635-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.502-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.646-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae (00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f)'. Ident: 'index-1166--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 6054)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.638-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 with provided UUID: a7913c01-2e98-410b-ad2b-5701fe9b5046 and options: { uuid: UUID("a7913c01-2e98-410b-ad2b-5701fe9b5046"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.504-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.646-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae (00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f)'. Ident: 'index-1175--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 6054)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.652-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.513-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: f6a059b0-fc4a-403d-a1be-09ff8052d4d0: test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a ( abe54ffe-5ab1-483d-86b6-d050f4c61002 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.646-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae'. Ident: collection-1165--8000595249233899911, commit timestamp: Timestamp(1574796792, 6054)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.661-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796792, 5256) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796792, 5448), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 12528 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 184ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.520-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.658-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.663-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.520-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.658-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.663-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae (00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 6054), t: 1 } and commit timestamp Timestamp(1574796792, 6054)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.520-0500 I STORAGE [conn112] Index build initialized: 75e2854f-04b2-44b5-a631-25ae23af89bd: test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 (a7913c01-2e98-410b-ad2b-5701fe9b5046 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.658-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 31db9b8f-459a-43e0-90fb-e3e09351c219: test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a (abe54ffe-5ab1-483d-86b6-d050f4c61002 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.663-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae (00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.520-0500 I INDEX [conn112] Waiting for index build to complete: 75e2854f-04b2-44b5-a631-25ae23af89bd
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.658-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.663-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae (00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f)'. Ident: 'index-1166--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 6054)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.520-0500 I INDEX [conn46] Index build completed: f6a059b0-fc4a-403d-a1be-09ff8052d4d0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.659-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.663-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae (00d1b616-2fb5-45cd-bbb0-e0be4cbdc10f)'. Ident: 'index-1175--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 6054)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.521-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 (f776f310-42e7-4f73-a375-1ae951f6e95e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.660-0500 I COMMAND [ReplWriterWorker-4] CMD: drop test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.663-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.e557346d-9840-4883-8a9e-00cb8af682ae'. Ident: collection-1165--4104909142373009110, commit timestamp: Timestamp(1574796792, 6054)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.521-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 (f776f310-42e7-4f73-a375-1ae951f6e95e).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.660-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 (f776f310-42e7-4f73-a375-1ae951f6e95e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 6058), t: 1 } and commit timestamp Timestamp(1574796792, 6058)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.675-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.521-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 (f776f310-42e7-4f73-a375-1ae951f6e95e)'. Ident: 'index-1162-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 6058)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.660-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 (f776f310-42e7-4f73-a375-1ae951f6e95e).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.675-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.521-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 (f776f310-42e7-4f73-a375-1ae951f6e95e)'. Ident: 'index-1165-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 6058)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.661-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 (f776f310-42e7-4f73-a375-1ae951f6e95e)'. Ident: 'index-1170--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 6058)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.675-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: eea6a72c-df6a-4217-a978-7a8788eaffb2: test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a (abe54ffe-5ab1-483d-86b6-d050f4c61002 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.521-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4'. Ident: collection-1160-8224331490264904478, commit timestamp: Timestamp(1574796792, 6058)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.661-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 (f776f310-42e7-4f73-a375-1ae951f6e95e)'. Ident: 'index-1177--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 6058)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.675-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.521-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.661-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4'. Ident: collection-1169--8000595249233899911, commit timestamp: Timestamp(1574796792, 6058)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.676-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.521-0500 I COMMAND [conn68] command test5_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1399890789446071074, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4713987132641047251, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796792357), clusterTime: Timestamp(1574796792, 3533) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 3534), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 162ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.661-0500 I COMMAND [ReplWriterWorker-2] CMD: drop test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.677-0500 I COMMAND [ReplWriterWorker-13] CMD: drop test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.521-0500 I COMMAND [conn108] CMD: drop test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.661-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 (567327c8-89a8-4cba-99b8-03d43c43b731) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 6059), t: 1 } and commit timestamp Timestamp(1574796792, 6059)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.677-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 (f776f310-42e7-4f73-a375-1ae951f6e95e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 6058), t: 1 } and commit timestamp Timestamp(1574796792, 6058)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.521-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 (567327c8-89a8-4cba-99b8-03d43c43b731) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.661-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 (567327c8-89a8-4cba-99b8-03d43c43b731).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.677-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 (f776f310-42e7-4f73-a375-1ae951f6e95e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.521-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 (567327c8-89a8-4cba-99b8-03d43c43b731).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.661-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 (567327c8-89a8-4cba-99b8-03d43c43b731)'. Ident: 'index-1172--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 6059)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.677-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 (f776f310-42e7-4f73-a375-1ae951f6e95e)'. Ident: 'index-1170--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 6058)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.521-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 (567327c8-89a8-4cba-99b8-03d43c43b731)'. Ident: 'index-1163-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 6059)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.661-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 (567327c8-89a8-4cba-99b8-03d43c43b731)'. Ident: 'index-1179--8000595249233899911', commit timestamp: 'Timestamp(1574796792, 6059)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.677-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4 (f776f310-42e7-4f73-a375-1ae951f6e95e)'. Ident: 'index-1177--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 6058)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.521-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.661-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3'. Ident: collection-1171--8000595249233899911, commit timestamp: Timestamp(1574796792, 6059)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.677-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.62494107-947b-465f-aa91-00b99289cff4'. Ident: collection-1169--4104909142373009110, commit timestamp: Timestamp(1574796792, 6058)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.521-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 (567327c8-89a8-4cba-99b8-03d43c43b731)'. Ident: 'index-1167-8224331490264904478', commit timestamp: 'Timestamp(1574796792, 6059)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.662-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.678-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.521-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3'. Ident: collection-1161-8224331490264904478, commit timestamp: Timestamp(1574796792, 6059)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:12.665-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 31db9b8f-459a-43e0-90fb-e3e09351c219: test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a ( abe54ffe-5ab1-483d-86b6-d050f4c61002 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.678-0500 I COMMAND [ReplWriterWorker-5] CMD: drop test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.522-0500 I COMMAND [conn70] command test5_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3111370250592855974, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6044588157417556827, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796792358), clusterTime: Timestamp(1574796792, 3534) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 3534), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 163ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.678-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 (567327c8-89a8-4cba-99b8-03d43c43b731) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796792, 6059), t: 1 } and commit timestamp Timestamp(1574796792, 6059)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.015-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 with provided UUID: bee24705-7ddb-4fb8-8cd7-10d8c7f97299 and options: { uuid: UUID("bee24705-7ddb-4fb8-8cd7-10d8c7f97299"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.522-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 with generated UUID: bee24705-7ddb-4fb8-8cd7-10d8c7f97299 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.678-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 (567327c8-89a8-4cba-99b8-03d43c43b731).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.028-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.523-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.678-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 (567327c8-89a8-4cba-99b8-03d43c43b731)'. Ident: 'index-1172--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 6059)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.043-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.524-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b with generated UUID: d80dc9fb-79cc-409e-8ad7-bb6513d59b10 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.678-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3 (567327c8-89a8-4cba-99b8-03d43c43b731)'. Ident: 'index-1179--4104909142373009110', commit timestamp: 'Timestamp(1574796792, 6059)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.043-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.525-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 with generated UUID: 0774d6e3-c392-4d67-a388-e2b8e3a01b4e and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.678-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.16ebd0ae-7633-43d8-897d-08b3acefdbf3'. Ident: collection-1171--4104909142373009110, commit timestamp: Timestamp(1574796792, 6059)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.043-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 60a9140f-f275-493f-997c-a4d9c2270bd8: test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 (a7913c01-2e98-410b-ad2b-5701fe9b5046 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.534-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 75e2854f-04b2-44b5-a631-25ae23af89bd: test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 ( a7913c01-2e98-410b-ad2b-5701fe9b5046 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:12.680-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: eea6a72c-df6a-4217-a978-7a8788eaffb2: test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a ( abe54ffe-5ab1-483d-86b6-d050f4c61002 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.043-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.558-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.044-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.030-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 with provided UUID: bee24705-7ddb-4fb8-8cd7-10d8c7f97299 and options: { uuid: UUID("bee24705-7ddb-4fb8-8cd7-10d8c7f97299"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.566-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.044-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b with provided UUID: d80dc9fb-79cc-409e-8ad7-bb6513d59b10 and options: { uuid: UUID("d80dc9fb-79cc-409e-8ad7-bb6513d59b10"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.045-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.012-0500 I INDEX [conn112] Index build completed: 75e2854f-04b2-44b5-a631-25ae23af89bd
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:12.574-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.012-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 6000), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 15687 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2526ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.012-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 appName: "tid:3" command: create { create: "tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831", temp: true, validationLevel: "off", validationAction: "error", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 6059), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2490ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.012-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b appName: "tid:2" command: create { create: "tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b", temp: true, validationLevel: "off", validationAction: "error", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 6062), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2488ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.012-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 appName: "tid:1" command: create { create: "tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25", temp: true, validationLevel: "off", validationAction: "error", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 6063), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2488ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.012-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.013-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (b64fbbdd-e86a-4c0f-b217-33c2381b1e50) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 1), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.013-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (b64fbbdd-e86a-4c0f-b217-33c2381b1e50).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.013-0500 I STORAGE [conn46] renameCollection: renaming collection abe54ffe-5ab1-483d-86b6-d050f4c61002 from test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.013-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (b64fbbdd-e86a-4c0f-b217-33c2381b1e50)'. Ident: 'index-1152-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.013-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (b64fbbdd-e86a-4c0f-b217-33c2381b1e50)'. Ident: 'index-1157-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.013-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1149-8224331490264904478, commit timestamp: Timestamp(1574796795, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.013-0500 I INDEX [conn114] Registering index build: e7dcbf93-292f-4ea2-b49e-86b83dc581ea
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.013-0500 I INDEX [conn108] Registering index build: 9622cb64-8f21-4e11-8f47-c0cf1df71785
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.013-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a appName: "tid:4" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "off", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 6564), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2452974 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2453ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.013-0500 I INDEX [conn112] Registering index build: 4320180c-b0b0-4a2f-964f-fa7a5d4abd17
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.013-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796792, 5256), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796792, 5448), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796792, 5256). Collection minimum timestamp is Timestamp(1574796792, 6064)" errName:SnapshotUnavailable errCode:246 reslen:602 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2350478 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2350ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.013-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7442200263300834835, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5335361345345430295, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796792448), clusterTime: Timestamp(1574796792, 4547) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 4547), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2174 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2564ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.017-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 with generated UUID: 723ea8fc-6101-479a-909c-6470b497478a and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.034-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.034-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.034-0500 I STORAGE [conn114] Index build initialized: e7dcbf93-292f-4ea2-b49e-86b83dc581ea: test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 (bee24705-7ddb-4fb8-8cd7-10d8c7f97299 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.034-0500 I INDEX [conn114] Waiting for index build to complete: e7dcbf93-292f-4ea2-b49e-86b83dc581ea
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.034-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.041-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.041-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.044-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.047-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.052-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: e7dcbf93-292f-4ea2-b49e-86b83dc581ea: test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 ( bee24705-7ddb-4fb8-8cd7-10d8c7f97299 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.054-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 60a9140f-f275-493f-997c-a4d9c2270bd8: test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 ( a7913c01-2e98-410b-ad2b-5701fe9b5046 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.058-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.058-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.058-0500 I STORAGE [conn108] Index build initialized: 9622cb64-8f21-4e11-8f47-c0cf1df71785: test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b (d80dc9fb-79cc-409e-8ad7-bb6513d59b10 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.058-0500 I INDEX [conn108] Waiting for index build to complete: 9622cb64-8f21-4e11-8f47-c0cf1df71785
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.058-0500 I INDEX [conn114] Index build completed: e7dcbf93-292f-4ea2-b49e-86b83dc581ea
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.058-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.059-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (abe54ffe-5ab1-483d-86b6-d050f4c61002) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 507), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.059-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (abe54ffe-5ab1-483d-86b6-d050f4c61002).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.059-0500 I STORAGE [conn46] renameCollection: renaming collection a7913c01-2e98-410b-ad2b-5701fe9b5046 from test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.059-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (abe54ffe-5ab1-483d-86b6-d050f4c61002)'. Ident: 'index-1171-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 507)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.059-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (abe54ffe-5ab1-483d-86b6-d050f4c61002)'. Ident: 'index-1173-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 507)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.059-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1169-8224331490264904478, commit timestamp: Timestamp(1574796795, 507)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.059-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.059-0500 I INDEX [conn110] Registering index build: 41bc3d6f-a181-4073-92b3-5fe1ea3800f9
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.059-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7883818558943096188, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8001158536357429037, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796792452), clusterTime: Timestamp(1574796792, 4550) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 4551), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2606ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.059-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:15.059-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796792, 4550), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2607ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.062-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b with generated UUID: e0f7cdd9-74d2-49bb-9738-2a28f28a2019 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.063-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.063-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.063-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.063-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 946973f9-25a3-49a7-9726-db970d1a02e4: test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 (a7913c01-2e98-410b-ad2b-5701fe9b5046 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.063-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.064-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.064-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 with provided UUID: 0774d6e3-c392-4d67-a388-e2b8e3a01b4e and options: { uuid: UUID("0774d6e3-c392-4d67-a388-e2b8e3a01b4e"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.065-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b with provided UUID: d80dc9fb-79cc-409e-8ad7-bb6513d59b10 and options: { uuid: UUID("d80dc9fb-79cc-409e-8ad7-bb6513d59b10"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.066-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.069-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.076-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 946973f9-25a3-49a7-9726-db970d1a02e4: test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 ( a7913c01-2e98-410b-ad2b-5701fe9b5046 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.081-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.084-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.084-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 with provided UUID: 0774d6e3-c392-4d67-a388-e2b8e3a01b4e and options: { uuid: UUID("0774d6e3-c392-4d67-a388-e2b8e3a01b4e"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.085-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.085-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.085-0500 I STORAGE [conn112] Index build initialized: 4320180c-b0b0-4a2f-964f-fa7a5d4abd17: test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 (0774d6e3-c392-4d67-a388-e2b8e3a01b4e ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.085-0500 I INDEX [conn112] Waiting for index build to complete: 4320180c-b0b0-4a2f-964f-fa7a5d4abd17
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.085-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.085-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a (abe54ffe-5ab1-483d-86b6-d050f4c61002) to test5_fsmdb0.agg_out and drop b64fbbdd-e86a-4c0f-b217-33c2381b1e50.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.085-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (b64fbbdd-e86a-4c0f-b217-33c2381b1e50) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 1), t: 1 } and commit timestamp Timestamp(1574796795, 1)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.085-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (b64fbbdd-e86a-4c0f-b217-33c2381b1e50).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.085-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection abe54ffe-5ab1-483d-86b6-d050f4c61002 from test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.085-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (b64fbbdd-e86a-4c0f-b217-33c2381b1e50)'. Ident: 'index-1162--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 1)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.085-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (b64fbbdd-e86a-4c0f-b217-33c2381b1e50)'. Ident: 'index-1173--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 1)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.085-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1161--8000595249233899911, commit timestamp: Timestamp(1574796795, 1)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.086-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 with provided UUID: 723ea8fc-6101-479a-909c-6470b497478a and options: { uuid: UUID("723ea8fc-6101-479a-909c-6470b497478a"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.086-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 9622cb64-8f21-4e11-8f47-c0cf1df71785: test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b ( d80dc9fb-79cc-409e-8ad7-bb6513d59b10 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.095-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.096-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.101-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.103-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.104-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.110-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.110-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.110-0500 I STORAGE [conn110] Index build initialized: 41bc3d6f-a181-4073-92b3-5fe1ea3800f9: test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 (723ea8fc-6101-479a-909c-6470b497478a ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.110-0500 I INDEX [conn110] Waiting for index build to complete: 41bc3d6f-a181-4073-92b3-5fe1ea3800f9
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.110-0500 I INDEX [conn108] Index build completed: 9622cb64-8f21-4e11-8f47-c0cf1df71785
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.111-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.111-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (a7913c01-2e98-410b-ad2b-5701fe9b5046) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 1015), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.111-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (a7913c01-2e98-410b-ad2b-5701fe9b5046).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.111-0500 I STORAGE [conn46] renameCollection: renaming collection bee24705-7ddb-4fb8-8cd7-10d8c7f97299 from test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.111-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 4320180c-b0b0-4a2f-964f-fa7a5d4abd17: test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 ( 0774d6e3-c392-4d67-a388-e2b8e3a01b4e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.111-0500 I INDEX [conn112] Index build completed: 4320180c-b0b0-4a2f-964f-fa7a5d4abd17
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.111-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a7913c01-2e98-410b-ad2b-5701fe9b5046)'. Ident: 'index-1172-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.111-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a7913c01-2e98-410b-ad2b-5701fe9b5046)'. Ident: 'index-1175-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.111-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1170-8224331490264904478, commit timestamp: Timestamp(1574796795, 1015)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.111-0500 I INDEX [conn114] Registering index build: e012b150-d80a-40e6-8da2-f83c8375cf57
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.111-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.111-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1322608944783355365, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 193754205998611468, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796792503), clusterTime: Timestamp(1574796792, 6054) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 6058), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2589ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:15.112-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796792, 6054), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2608ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.112-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.115-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e with generated UUID: 8a79d53b-f043-431a-8c1c-ba42c2728f12 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.115-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a (abe54ffe-5ab1-483d-86b6-d050f4c61002) to test5_fsmdb0.agg_out and drop b64fbbdd-e86a-4c0f-b217-33c2381b1e50.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.115-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (b64fbbdd-e86a-4c0f-b217-33c2381b1e50) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 1), t: 1 } and commit timestamp Timestamp(1574796795, 1)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.115-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (b64fbbdd-e86a-4c0f-b217-33c2381b1e50).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.115-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection abe54ffe-5ab1-483d-86b6-d050f4c61002 from test5_fsmdb0.tmp.agg_out.96a58cfd-69f3-4617-b23a-7d2cfe79254a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.115-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (b64fbbdd-e86a-4c0f-b217-33c2381b1e50)'. Ident: 'index-1162--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 1)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.115-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (b64fbbdd-e86a-4c0f-b217-33c2381b1e50)'. Ident: 'index-1173--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 1)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.115-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1161--4104909142373009110, commit timestamp: Timestamp(1574796795, 1)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.116-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 with provided UUID: 723ea8fc-6101-479a-909c-6470b497478a and options: { uuid: UUID("723ea8fc-6101-479a-909c-6470b497478a"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.122-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.130-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.130-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.130-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 7a62d7ef-94c6-4761-a068-a9dd5a2aef02: test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 (bee24705-7ddb-4fb8-8cd7-10d8c7f97299 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.130-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.130-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.130-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.131-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796795, 1) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 101ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.132-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 (a7913c01-2e98-410b-ad2b-5701fe9b5046) to test5_fsmdb0.agg_out and drop abe54ffe-5ab1-483d-86b6-d050f4c61002.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.139-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:15.152-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796792, 6058), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2629ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:15.152-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796792, 6059), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2629ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.162-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.132-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:15.220-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 204ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.139-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:15.299-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796795, 1015), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 185ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.162-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.133-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (abe54ffe-5ab1-483d-86b6-d050f4c61002) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 507), t: 1 } and commit timestamp Timestamp(1574796795, 507)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:15.258-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796795, 507), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 197ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.139-0500 I STORAGE [conn114] Index build initialized: e012b150-d80a-40e6-8da2-f83c8375cf57: test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b (e0f7cdd9-74d2-49bb-9738-2a28f28a2019 ): indexes: 1
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:15.350-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796795, 2023), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 197ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.162-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 71a914cf-db57-4c1f-933c-1cebd2282dcd: test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 (bee24705-7ddb-4fb8-8cd7-10d8c7f97299 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.133-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (abe54ffe-5ab1-483d-86b6-d050f4c61002).
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:15.299-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796795, 2023), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 146ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.139-0500 I INDEX [conn114] Waiting for index build to complete: e012b150-d80a-40e6-8da2-f83c8375cf57
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.162-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.133-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection a7913c01-2e98-410b-ad2b-5701fe9b5046 from test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:15.384-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796795, 2658), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 161ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.139-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.163-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.133-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (abe54ffe-5ab1-483d-86b6-d050f4c61002)'. Ident: 'index-1182--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 507)'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:15.421-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796795, 3035), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 162ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.140-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 41bc3d6f-a181-4073-92b3-5fe1ea3800f9: test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 ( 723ea8fc-6101-479a-909c-6470b497478a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.164-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 (a7913c01-2e98-410b-ad2b-5701fe9b5046) to test5_fsmdb0.agg_out and drop abe54ffe-5ab1-483d-86b6-d050f4c61002.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.133-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (abe54ffe-5ab1-483d-86b6-d050f4c61002)'. Ident: 'index-1185--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 507)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.140-0500 I INDEX [conn110] Index build completed: 41bc3d6f-a181-4073-92b3-5fe1ea3800f9
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:18.021-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796795, 4045), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2707ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.166-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.133-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1181--8000595249233899911, commit timestamp: Timestamp(1574796795, 507)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.148-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.166-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (abe54ffe-5ab1-483d-86b6-d050f4c61002) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 507), t: 1 } and commit timestamp Timestamp(1574796795, 507)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.134-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b with provided UUID: e0f7cdd9-74d2-49bb-9738-2a28f28a2019 and options: { uuid: UUID("e0f7cdd9-74d2-49bb-9738-2a28f28a2019"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.134-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 7a62d7ef-94c6-4761-a068-a9dd5a2aef02: test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 ( bee24705-7ddb-4fb8-8cd7-10d8c7f97299 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.166-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (abe54ffe-5ab1-483d-86b6-d050f4c61002).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.148-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.149-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.166-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection a7913c01-2e98-410b-ad2b-5701fe9b5046 from test5_fsmdb0.tmp.agg_out.b9488b69-b619-421f-96ee-1adc37ccdc79 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.151-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.171-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.166-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (abe54ffe-5ab1-483d-86b6-d050f4c61002)'. Ident: 'index-1182--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 507)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.151-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.171-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.166-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (abe54ffe-5ab1-483d-86b6-d050f4c61002)'. Ident: 'index-1185--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 507)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.151-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (bee24705-7ddb-4fb8-8cd7-10d8c7f97299) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 2022), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.171-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 1e508a71-7c57-4c67-9273-616b91b87f61: test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b (d80dc9fb-79cc-409e-8ad7-bb6513d59b10 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.166-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1181--4104909142373009110, commit timestamp: Timestamp(1574796795, 507)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.151-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (bee24705-7ddb-4fb8-8cd7-10d8c7f97299).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.171-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.167-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b with provided UUID: e0f7cdd9-74d2-49bb-9738-2a28f28a2019 and options: { uuid: UUID("e0f7cdd9-74d2-49bb-9738-2a28f28a2019"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.151-0500 I STORAGE [conn46] renameCollection: renaming collection d80dc9fb-79cc-409e-8ad7-bb6513d59b10 from test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.172-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.167-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 71a914cf-db57-4c1f-933c-1cebd2282dcd: test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 ( bee24705-7ddb-4fb8-8cd7-10d8c7f97299 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.151-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bee24705-7ddb-4fb8-8cd7-10d8c7f97299)'. Ident: 'index-1180-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 2022)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.174-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.183-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.151-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bee24705-7ddb-4fb8-8cd7-10d8c7f97299)'. Ident: 'index-1183-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 2022)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.181-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 1e508a71-7c57-4c67-9273-616b91b87f61: test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b ( d80dc9fb-79cc-409e-8ad7-bb6513d59b10 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.202-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.151-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1177-8224331490264904478, commit timestamp: Timestamp(1574796795, 2022)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.194-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.202-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.151-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.194-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.202-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 8319a1e2-df1e-4c86-bb0c-8e35ad99ce93: test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b (d80dc9fb-79cc-409e-8ad7-bb6513d59b10 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.151-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (d80dc9fb-79cc-409e-8ad7-bb6513d59b10) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 2023), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.194-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 0e89618e-5641-4d29-aab5-b89ec8bf542d: test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 (0774d6e3-c392-4d67-a388-e2b8e3a01b4e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.202-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.151-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (d80dc9fb-79cc-409e-8ad7-bb6513d59b10).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.195-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.203-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.151-0500 I STORAGE [conn108] renameCollection: renaming collection 0774d6e3-c392-4d67-a388-e2b8e3a01b4e from test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.195-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.206-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.152-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 235391368033565749, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3773190136333439888, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796792522), clusterTime: Timestamp(1574796792, 6058) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 6060), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2628ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.196-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 (bee24705-7ddb-4fb8-8cd7-10d8c7f97299) to test5_fsmdb0.agg_out and drop a7913c01-2e98-410b-ad2b-5701fe9b5046.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.210-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 8319a1e2-df1e-4c86-bb0c-8e35ad99ce93: test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b ( d80dc9fb-79cc-409e-8ad7-bb6513d59b10 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.152-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d80dc9fb-79cc-409e-8ad7-bb6513d59b10)'. Ident: 'index-1181-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 2023)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.198-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.225-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.152-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d80dc9fb-79cc-409e-8ad7-bb6513d59b10)'. Ident: 'index-1187-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 2023)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.198-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (a7913c01-2e98-410b-ad2b-5701fe9b5046) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 1015), t: 1 } and commit timestamp Timestamp(1574796795, 1015)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.225-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.152-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1178-8224331490264904478, commit timestamp: Timestamp(1574796795, 2023)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.198-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (a7913c01-2e98-410b-ad2b-5701fe9b5046).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.225-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 6da5fd14-4175-43c0-b55b-3a3e9b2fbed0: test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 (0774d6e3-c392-4d67-a388-e2b8e3a01b4e ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.152-0500 I INDEX [conn112] Registering index build: d81d29c9-357e-45f5-97b9-35a5f3260ecd
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.198-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection bee24705-7ddb-4fb8-8cd7-10d8c7f97299 from test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:18.027-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796795, 4043), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2726ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.225-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.152-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5651321820113564061, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2956230906577455960, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796792523), clusterTime: Timestamp(1574796792, 6059) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796792, 6060), signature: { hash: BinData(0, 2C8835570DFB884F25D7D05CBD70D8EDB84D3542), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2628ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.198-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a7913c01-2e98-410b-ad2b-5701fe9b5046)'. Ident: 'index-1184--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 1015)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.226-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.152-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: e012b150-d80a-40e6-8da2-f83c8375cf57: test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b ( e0f7cdd9-74d2-49bb-9738-2a28f28a2019 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.198-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a7913c01-2e98-410b-ad2b-5701fe9b5046)'. Ident: 'index-1189--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 1015)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.227-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 (bee24705-7ddb-4fb8-8cd7-10d8c7f97299) to test5_fsmdb0.agg_out and drop a7913c01-2e98-410b-ad2b-5701fe9b5046.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.154-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 with generated UUID: e3c0c865-c32e-4505-9045-98f67127cb3b and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.198-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1183--8000595249233899911, commit timestamp: Timestamp(1574796795, 1015)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.228-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.157-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 with generated UUID: a1b6195b-3d3b-4b66-9447-1b72f300791d and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.199-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e with provided UUID: 8a79d53b-f043-431a-8c1c-ba42c2728f12 and options: { uuid: UUID("8a79d53b-f043-431a-8c1c-ba42c2728f12"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.228-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (a7913c01-2e98-410b-ad2b-5701fe9b5046) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 1015), t: 1 } and commit timestamp Timestamp(1574796795, 1015)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.184-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.201-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 0e89618e-5641-4d29-aab5-b89ec8bf542d: test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 ( 0774d6e3-c392-4d67-a388-e2b8e3a01b4e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.228-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (a7913c01-2e98-410b-ad2b-5701fe9b5046).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.184-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.217-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.228-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection bee24705-7ddb-4fb8-8cd7-10d8c7f97299 from test5_fsmdb0.tmp.agg_out.42d4d913-fd29-46e3-970f-864ba35ea831 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.184-0500 I STORAGE [conn112] Index build initialized: d81d29c9-357e-45f5-97b9-35a5f3260ecd: test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e (8a79d53b-f043-431a-8c1c-ba42c2728f12 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.241-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.228-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a7913c01-2e98-410b-ad2b-5701fe9b5046)'. Ident: 'index-1184--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.184-0500 I INDEX [conn112] Waiting for index build to complete: d81d29c9-357e-45f5-97b9-35a5f3260ecd
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.241-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.228-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a7913c01-2e98-410b-ad2b-5701fe9b5046)'. Ident: 'index-1189--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 1015)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.184-0500 I INDEX [conn114] Index build completed: e012b150-d80a-40e6-8da2-f83c8375cf57
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.241-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 9ffff4ce-2457-4144-8d02-9163634a228b: test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 (723ea8fc-6101-479a-909c-6470b497478a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.228-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1183--4104909142373009110, commit timestamp: Timestamp(1574796795, 1015)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.184-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.241-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.229-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e with provided UUID: 8a79d53b-f043-431a-8c1c-ba42c2728f12 and options: { uuid: UUID("8a79d53b-f043-431a-8c1c-ba42c2728f12"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.192-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.242-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.231-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 6da5fd14-4175-43c0-b55b-3a3e9b2fbed0: test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 ( 0774d6e3-c392-4d67-a388-e2b8e3a01b4e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.193-0500 I INDEX [conn108] Registering index build: 6af2ec26-16ea-4565-95f1-634f1bbc92dc
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.244-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.247-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.200-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.247-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 9ffff4ce-2457-4144-8d02-9163634a228b: test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 ( 723ea8fc-6101-479a-909c-6470b497478a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.268-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.201-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.263-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.268-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.212-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.263-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.268-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: d5d14213-a4fb-477f-bb83-8cbc392f9ce3: test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 (723ea8fc-6101-479a-909c-6470b497478a ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.219-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.263-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 8615561e-ad0a-4e14-8d50-5c72e54e0427: test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b (e0f7cdd9-74d2-49bb-9738-2a28f28a2019 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.268-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.219-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.263-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.269-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.219-0500 I STORAGE [conn108] Index build initialized: 6af2ec26-16ea-4565-95f1-634f1bbc92dc: test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 (e3c0c865-c32e-4505-9045-98f67127cb3b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.263-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.271-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.219-0500 I INDEX [conn108] Waiting for index build to complete: 6af2ec26-16ea-4565-95f1-634f1bbc92dc
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.264-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b (d80dc9fb-79cc-409e-8ad7-bb6513d59b10) to test5_fsmdb0.agg_out and drop bee24705-7ddb-4fb8-8cd7-10d8c7f97299.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.273-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796795, 1274) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796795, 1402), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 9909 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 138ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.219-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.266-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.276-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: d5d14213-a4fb-477f-bb83-8cbc392f9ce3: test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 ( 723ea8fc-6101-479a-909c-6470b497478a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.219-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (0774d6e3-c392-4d67-a388-e2b8e3a01b4e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 2594), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.267-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (bee24705-7ddb-4fb8-8cd7-10d8c7f97299) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 2022), t: 1 } and commit timestamp Timestamp(1574796795, 2022)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.291-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.219-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (0774d6e3-c392-4d67-a388-e2b8e3a01b4e).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.267-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (bee24705-7ddb-4fb8-8cd7-10d8c7f97299).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.291-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.219-0500 I STORAGE [conn110] renameCollection: renaming collection 723ea8fc-6101-479a-909c-6470b497478a from test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.267-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection d80dc9fb-79cc-409e-8ad7-bb6513d59b10 from test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.291-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 35e09cef-2bfe-42ac-aca9-070615521f86: test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b (e0f7cdd9-74d2-49bb-9738-2a28f28a2019 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.220-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0774d6e3-c392-4d67-a388-e2b8e3a01b4e)'. Ident: 'index-1182-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 2594)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.267-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bee24705-7ddb-4fb8-8cd7-10d8c7f97299)'. Ident: 'index-1188--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 2022)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.291-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.220-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0774d6e3-c392-4d67-a388-e2b8e3a01b4e)'. Ident: 'index-1189-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 2594)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.267-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bee24705-7ddb-4fb8-8cd7-10d8c7f97299)'. Ident: 'index-1197--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 2022)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.291-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.220-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1179-8224331490264904478, commit timestamp: Timestamp(1574796795, 2594)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.267-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1187--8000595249233899911, commit timestamp: Timestamp(1574796795, 2022)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.292-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b (d80dc9fb-79cc-409e-8ad7-bb6513d59b10) to test5_fsmdb0.agg_out and drop bee24705-7ddb-4fb8-8cd7-10d8c7f97299.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.220-0500 I INDEX [conn46] Registering index build: 9fa08a50-f276-4fc3-8cb2-5f6c08479496
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.268-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 (0774d6e3-c392-4d67-a388-e2b8e3a01b4e) to test5_fsmdb0.agg_out and drop d80dc9fb-79cc-409e-8ad7-bb6513d59b10.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.294-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.220-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.268-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 8615561e-ad0a-4e14-8d50-5c72e54e0427: test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b ( e0f7cdd9-74d2-49bb-9738-2a28f28a2019 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.294-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (bee24705-7ddb-4fb8-8cd7-10d8c7f97299) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 2022), t: 1 } and commit timestamp Timestamp(1574796795, 2022)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.220-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2581447926337991608, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1735815305214398540, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796795016), clusterTime: Timestamp(1574796795, 1) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 1), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 203ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.268-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (d80dc9fb-79cc-409e-8ad7-bb6513d59b10) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 2023), t: 1 } and commit timestamp Timestamp(1574796795, 2023)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.294-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (bee24705-7ddb-4fb8-8cd7-10d8c7f97299).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.220-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: d81d29c9-357e-45f5-97b9-35a5f3260ecd: test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e ( 8a79d53b-f043-431a-8c1c-ba42c2728f12 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.268-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (d80dc9fb-79cc-409e-8ad7-bb6513d59b10).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.294-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection d80dc9fb-79cc-409e-8ad7-bb6513d59b10 from test5_fsmdb0.tmp.agg_out.7ceca448-ecc5-446d-8e9c-57a2dcd0b90b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.221-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.268-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 0774d6e3-c392-4d67-a388-e2b8e3a01b4e from test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.294-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bee24705-7ddb-4fb8-8cd7-10d8c7f97299)'. Ident: 'index-1188--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 2022)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.224-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f with generated UUID: 1850821f-6c98-41fa-8acc-7a60d18b1674 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.268-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d80dc9fb-79cc-409e-8ad7-bb6513d59b10)'. Ident: 'index-1192--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 2023)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.294-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bee24705-7ddb-4fb8-8cd7-10d8c7f97299)'. Ident: 'index-1197--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 2022)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.232-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.268-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d80dc9fb-79cc-409e-8ad7-bb6513d59b10)'. Ident: 'index-1201--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 2023)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.294-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1187--4104909142373009110, commit timestamp: Timestamp(1574796795, 2022)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.248-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.268-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1191--8000595249233899911, commit timestamp: Timestamp(1574796795, 2023)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.295-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 (0774d6e3-c392-4d67-a388-e2b8e3a01b4e) to test5_fsmdb0.agg_out and drop d80dc9fb-79cc-409e-8ad7-bb6513d59b10.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.248-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.269-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 with provided UUID: e3c0c865-c32e-4505-9045-98f67127cb3b and options: { uuid: UUID("e3c0c865-c32e-4505-9045-98f67127cb3b"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.295-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (d80dc9fb-79cc-409e-8ad7-bb6513d59b10) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 2023), t: 1 } and commit timestamp Timestamp(1574796795, 2023)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.248-0500 I STORAGE [conn46] Index build initialized: 9fa08a50-f276-4fc3-8cb2-5f6c08479496: test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 (a1b6195b-3d3b-4b66-9447-1b72f300791d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.282-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.295-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (d80dc9fb-79cc-409e-8ad7-bb6513d59b10).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.248-0500 I INDEX [conn46] Waiting for index build to complete: 9fa08a50-f276-4fc3-8cb2-5f6c08479496
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.283-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 with provided UUID: a1b6195b-3d3b-4b66-9447-1b72f300791d and options: { uuid: UUID("a1b6195b-3d3b-4b66-9447-1b72f300791d"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.295-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 0774d6e3-c392-4d67-a388-e2b8e3a01b4e from test5_fsmdb0.tmp.agg_out.a2e89a43-2d41-48f8-91d5-d1ded406cf25 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.248-0500 I INDEX [conn112] Index build completed: d81d29c9-357e-45f5-97b9-35a5f3260ecd
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.300-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.295-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d80dc9fb-79cc-409e-8ad7-bb6513d59b10)'. Ident: 'index-1192--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 2023)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.249-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 6af2ec26-16ea-4565-95f1-634f1bbc92dc: test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 ( e3c0c865-c32e-4505-9045-98f67127cb3b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.328-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.295-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d80dc9fb-79cc-409e-8ad7-bb6513d59b10)'. Ident: 'index-1201--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 2023)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.249-0500 I INDEX [conn108] Index build completed: 6af2ec26-16ea-4565-95f1-634f1bbc92dc
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.328-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.295-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1191--4104909142373009110, commit timestamp: Timestamp(1574796795, 2023)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.257-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.257-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.296-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 with provided UUID: e3c0c865-c32e-4505-9045-98f67127cb3b and options: { uuid: UUID("e3c0c865-c32e-4505-9045-98f67127cb3b"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.328-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: e948382b-bbe4-44d7-a4f6-a8ee2d5c248e: test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e (8a79d53b-f043-431a-8c1c-ba42c2728f12 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.257-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (723ea8fc-6101-479a-909c-6470b497478a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 3035), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.296-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 35e09cef-2bfe-42ac-aca9-070615521f86: test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b ( e0f7cdd9-74d2-49bb-9738-2a28f28a2019 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.328-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.257-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (723ea8fc-6101-479a-909c-6470b497478a).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.310-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.329-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.257-0500 I STORAGE [conn114] renameCollection: renaming collection e0f7cdd9-74d2-49bb-9738-2a28f28a2019 from test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.311-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 with provided UUID: a1b6195b-3d3b-4b66-9447-1b72f300791d and options: { uuid: UUID("a1b6195b-3d3b-4b66-9447-1b72f300791d"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.331-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 (723ea8fc-6101-479a-909c-6470b497478a) to test5_fsmdb0.agg_out and drop 0774d6e3-c392-4d67-a388-e2b8e3a01b4e.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.257-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (723ea8fc-6101-479a-909c-6470b497478a)'. Ident: 'index-1186-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 3035)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.327-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.331-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.257-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (723ea8fc-6101-479a-909c-6470b497478a)'. Ident: 'index-1193-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 3035)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.349-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.331-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (0774d6e3-c392-4d67-a388-e2b8e3a01b4e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 2594), t: 1 } and commit timestamp Timestamp(1574796795, 2594)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.257-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1184-8224331490264904478, commit timestamp: Timestamp(1574796795, 3035)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.349-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.331-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (0774d6e3-c392-4d67-a388-e2b8e3a01b4e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.257-0500 I INDEX [conn110] Registering index build: 7e494cd6-d095-4464-a68e-3a786caf4d3a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.349-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: e19ba465-f454-4acc-aa17-54465ea4a33c: test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e (8a79d53b-f043-431a-8c1c-ba42c2728f12 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.331-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection 723ea8fc-6101-479a-909c-6470b497478a from test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.257-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.349-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.332-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0774d6e3-c392-4d67-a388-e2b8e3a01b4e)'. Ident: 'index-1194--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 2594)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.258-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3770712737960071345, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2718546145114896652, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796795061), clusterTime: Timestamp(1574796795, 507) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 507), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 196ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.350-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.332-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0774d6e3-c392-4d67-a388-e2b8e3a01b4e)'. Ident: 'index-1203--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 2594)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.258-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.351-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 (723ea8fc-6101-479a-909c-6470b497478a) to test5_fsmdb0.agg_out and drop 0774d6e3-c392-4d67-a388-e2b8e3a01b4e.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.332-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1193--8000595249233899911, commit timestamp: Timestamp(1574796795, 2594)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.260-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e with generated UUID: 566efeed-eba9-4e46-b6fc-d19c2970371c and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.354-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.334-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: e948382b-bbe4-44d7-a4f6-a8ee2d5c248e: test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e ( 8a79d53b-f043-431a-8c1c-ba42c2728f12 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.268-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.354-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (0774d6e3-c392-4d67-a388-e2b8e3a01b4e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 2594), t: 1 } and commit timestamp Timestamp(1574796795, 2594)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.336-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f with provided UUID: 1850821f-6c98-41fa-8acc-7a60d18b1674 and options: { uuid: UUID("1850821f-6c98-41fa-8acc-7a60d18b1674"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.285-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.354-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (0774d6e3-c392-4d67-a388-e2b8e3a01b4e).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.352-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.285-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.354-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 723ea8fc-6101-479a-909c-6470b497478a from test5_fsmdb0.tmp.agg_out.ceb3bbd5-dda4-411d-a15f-7db04da11c64 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.372-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.285-0500 I STORAGE [conn110] Index build initialized: 7e494cd6-d095-4464-a68e-3a786caf4d3a: test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f (1850821f-6c98-41fa-8acc-7a60d18b1674 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.354-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0774d6e3-c392-4d67-a388-e2b8e3a01b4e)'. Ident: 'index-1194--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 2594)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.372-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.285-0500 I INDEX [conn110] Waiting for index build to complete: 7e494cd6-d095-4464-a68e-3a786caf4d3a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.354-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0774d6e3-c392-4d67-a388-e2b8e3a01b4e)'. Ident: 'index-1203--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 2594)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.372-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 8e190a75-9295-4f86-8cd3-596b8326aebe: test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 (e3c0c865-c32e-4505-9045-98f67127cb3b ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.285-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.354-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1193--4104909142373009110, commit timestamp: Timestamp(1574796795, 2594)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.373-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.287-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 9fa08a50-f276-4fc3-8cb2-5f6c08479496: test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 ( a1b6195b-3d3b-4b66-9447-1b72f300791d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.355-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: e19ba465-f454-4acc-aa17-54465ea4a33c: test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e ( 8a79d53b-f043-431a-8c1c-ba42c2728f12 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.373-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.287-0500 I INDEX [conn46] Index build completed: 9fa08a50-f276-4fc3-8cb2-5f6c08479496
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.357-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f with provided UUID: 1850821f-6c98-41fa-8acc-7a60d18b1674 and options: { uuid: UUID("1850821f-6c98-41fa-8acc-7a60d18b1674"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.375-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b (e0f7cdd9-74d2-49bb-9738-2a28f28a2019) to test5_fsmdb0.agg_out and drop 723ea8fc-6101-479a-909c-6470b497478a.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.295-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.371-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.376-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.296-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.388-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.376-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (723ea8fc-6101-479a-909c-6470b497478a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 3035), t: 1 } and commit timestamp Timestamp(1574796795, 3035)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.298-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.388-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.376-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (723ea8fc-6101-479a-909c-6470b497478a).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.298-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.388-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 59743516-3efa-44df-8925-df88ee89d6c3: test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 (e3c0c865-c32e-4505-9045-98f67127cb3b ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.376-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection e0f7cdd9-74d2-49bb-9738-2a28f28a2019 from test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.298-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (e0f7cdd9-74d2-49bb-9738-2a28f28a2019) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 4042), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.389-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.377-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (723ea8fc-6101-479a-909c-6470b497478a)'. Ident: 'index-1196--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 3035)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.298-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (e0f7cdd9-74d2-49bb-9738-2a28f28a2019).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.389-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.377-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (723ea8fc-6101-479a-909c-6470b497478a)'. Ident: 'index-1207--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 3035)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.298-0500 I STORAGE [conn112] renameCollection: renaming collection 8a79d53b-f043-431a-8c1c-ba42c2728f12 from test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.391-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b (e0f7cdd9-74d2-49bb-9738-2a28f28a2019) to test5_fsmdb0.agg_out and drop 723ea8fc-6101-479a-909c-6470b497478a.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.377-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1195--8000595249233899911, commit timestamp: Timestamp(1574796795, 3035)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.298-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e0f7cdd9-74d2-49bb-9738-2a28f28a2019)'. Ident: 'index-1192-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 4042)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.391-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.377-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e with provided UUID: 566efeed-eba9-4e46-b6fc-d19c2970371c and options: { uuid: UUID("566efeed-eba9-4e46-b6fc-d19c2970371c"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.298-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e0f7cdd9-74d2-49bb-9738-2a28f28a2019)'. Ident: 'index-1195-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 4042)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.391-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (723ea8fc-6101-479a-909c-6470b497478a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 3035), t: 1 } and commit timestamp Timestamp(1574796795, 3035)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.380-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 8e190a75-9295-4f86-8cd3-596b8326aebe: test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 ( e3c0c865-c32e-4505-9045-98f67127cb3b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.298-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1190-8224331490264904478, commit timestamp: Timestamp(1574796795, 4042)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.391-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (723ea8fc-6101-479a-909c-6470b497478a).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.395-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.298-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.391-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection e0f7cdd9-74d2-49bb-9738-2a28f28a2019 from test5_fsmdb0.tmp.agg_out.e5ce096e-1051-4d26-bd57-1fc4e921c04b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.415-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.299-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (8a79d53b-f043-431a-8c1c-ba42c2728f12) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 4043), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.391-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (723ea8fc-6101-479a-909c-6470b497478a)'. Ident: 'index-1196--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 3035)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.415-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.299-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (8a79d53b-f043-431a-8c1c-ba42c2728f12).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.391-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (723ea8fc-6101-479a-909c-6470b497478a)'. Ident: 'index-1207--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 3035)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.415-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: d7b352e2-beab-4d94-88f9-0a37a75a4361: test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 (a1b6195b-3d3b-4b66-9447-1b72f300791d ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.299-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 9084725184402086154, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7780589368853379571, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796795113), clusterTime: Timestamp(1574796795, 1015) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 1015), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 184ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.391-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1195--4104909142373009110, commit timestamp: Timestamp(1574796795, 3035)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.415-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.299-0500 I STORAGE [conn46] renameCollection: renaming collection e3c0c865-c32e-4505-9045-98f67127cb3b from test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.393-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 59743516-3efa-44df-8925-df88ee89d6c3: test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 ( e3c0c865-c32e-4505-9045-98f67127cb3b ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.416-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.299-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (8a79d53b-f043-431a-8c1c-ba42c2728f12)'. Ident: 'index-1198-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 4043)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.396-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e with provided UUID: 566efeed-eba9-4e46-b6fc-d19c2970371c and options: { uuid: UUID("566efeed-eba9-4e46-b6fc-d19c2970371c"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.419-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.299-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (8a79d53b-f043-431a-8c1c-ba42c2728f12)'. Ident: 'index-1199-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 4043)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.412-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.421-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d7b352e2-beab-4d94-88f9-0a37a75a4361: test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 ( a1b6195b-3d3b-4b66-9447-1b72f300791d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.299-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1196-8224331490264904478, commit timestamp: Timestamp(1574796795, 4043)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.432-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.442-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.299-0500 I INDEX [conn114] Registering index build: f130b44d-9cdf-4798-a59b-1c064982a799
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.432-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.442-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.299-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4712913181310692387, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7998717639144598381, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796795153), clusterTime: Timestamp(1574796795, 2023) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 2023), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 145ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.432-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 00d32b40-4557-4d9b-9078-6c081627f725: test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 (a1b6195b-3d3b-4b66-9447-1b72f300791d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.442-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 523e3744-b3a7-4bf4-bc02-b349d138b531: test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f (1850821f-6c98-41fa-8acc-7a60d18b1674 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.299-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 7e494cd6-d095-4464-a68e-3a786caf4d3a: test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f ( 1850821f-6c98-41fa-8acc-7a60d18b1674 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.432-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.442-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.312-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.432-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.443-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.312-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.435-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.444-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e (8a79d53b-f043-431a-8c1c-ba42c2728f12) to test5_fsmdb0.agg_out and drop e0f7cdd9-74d2-49bb-9738-2a28f28a2019.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.312-0500 I STORAGE [conn114] Index build initialized: f130b44d-9cdf-4798-a59b-1c064982a799: test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e (566efeed-eba9-4e46-b6fc-d19c2970371c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.439-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796795, 3871) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796795, 3935), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 4447 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 150ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.445-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.312-0500 I INDEX [conn114] Waiting for index build to complete: f130b44d-9cdf-4798-a59b-1c064982a799
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.439-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 00d32b40-4557-4d9b-9078-6c081627f725: test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 ( a1b6195b-3d3b-4b66-9447-1b72f300791d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.445-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (e0f7cdd9-74d2-49bb-9738-2a28f28a2019) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 4042), t: 1 } and commit timestamp Timestamp(1574796795, 4042)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.312-0500 I INDEX [conn110] Index build completed: 7e494cd6-d095-4464-a68e-3a786caf4d3a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.458-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.445-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (e0f7cdd9-74d2-49bb-9738-2a28f28a2019).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.313-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.458-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.445-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 8a79d53b-f043-431a-8c1c-ba42c2728f12 from test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.313-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 with generated UUID: 24268169-6668-4b6b-9dde-f171a7a5d0bc and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.458-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: eb29dcbf-903d-49a8-9059-70a0d51a5504: test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f (1850821f-6c98-41fa-8acc-7a60d18b1674 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.445-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e0f7cdd9-74d2-49bb-9738-2a28f28a2019)'. Ident: 'index-1200--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 4042)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.313-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.458-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.445-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e0f7cdd9-74d2-49bb-9738-2a28f28a2019)'. Ident: 'index-1209--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 4042)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.315-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de with generated UUID: 5ebdd2b6-953b-46b8-9dbe-17d0421e1a99 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.459-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.445-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1199--8000595249233899911, commit timestamp: Timestamp(1574796795, 4042)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.317-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.460-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e (8a79d53b-f043-431a-8c1c-ba42c2728f12) to test5_fsmdb0.agg_out and drop e0f7cdd9-74d2-49bb-9738-2a28f28a2019.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.446-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 (e3c0c865-c32e-4505-9045-98f67127cb3b) to test5_fsmdb0.agg_out and drop 8a79d53b-f043-431a-8c1c-ba42c2728f12.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.333-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: f130b44d-9cdf-4798-a59b-1c064982a799: test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e ( 566efeed-eba9-4e46-b6fc-d19c2970371c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.462-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.446-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (8a79d53b-f043-431a-8c1c-ba42c2728f12) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 4043), t: 1 } and commit timestamp Timestamp(1574796795, 4043)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.333-0500 I INDEX [conn114] Index build completed: f130b44d-9cdf-4798-a59b-1c064982a799
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.462-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (e0f7cdd9-74d2-49bb-9738-2a28f28a2019) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 4042), t: 1 } and commit timestamp Timestamp(1574796795, 4042)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.446-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (8a79d53b-f043-431a-8c1c-ba42c2728f12).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.343-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.462-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (e0f7cdd9-74d2-49bb-9738-2a28f28a2019).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.446-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection e3c0c865-c32e-4505-9045-98f67127cb3b from test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.349-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.462-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 8a79d53b-f043-431a-8c1c-ba42c2728f12 from test5_fsmdb0.tmp.agg_out.b7aebe78-52a7-493f-9acd-4d94151cce5e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.446-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (8a79d53b-f043-431a-8c1c-ba42c2728f12)'. Ident: 'index-1206--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 4043)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.349-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.462-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e0f7cdd9-74d2-49bb-9738-2a28f28a2019)'. Ident: 'index-1200--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 4042)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.446-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (8a79d53b-f043-431a-8c1c-ba42c2728f12)'. Ident: 'index-1215--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 4043)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.350-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (e3c0c865-c32e-4505-9045-98f67127cb3b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 4614), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.462-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e0f7cdd9-74d2-49bb-9738-2a28f28a2019)'. Ident: 'index-1209--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 4042)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.446-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1205--8000595249233899911, commit timestamp: Timestamp(1574796795, 4043)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.350-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (e3c0c865-c32e-4505-9045-98f67127cb3b).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.462-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1199--4104909142373009110, commit timestamp: Timestamp(1574796795, 4042)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.447-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 523e3744-b3a7-4bf4-bc02-b349d138b531: test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f ( 1850821f-6c98-41fa-8acc-7a60d18b1674 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.350-0500 I STORAGE [conn108] renameCollection: renaming collection a1b6195b-3d3b-4b66-9447-1b72f300791d from test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.463-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 (e3c0c865-c32e-4505-9045-98f67127cb3b) to test5_fsmdb0.agg_out and drop 8a79d53b-f043-431a-8c1c-ba42c2728f12.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.450-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 with provided UUID: 24268169-6668-4b6b-9dde-f171a7a5d0bc and options: { uuid: UUID("24268169-6668-4b6b-9dde-f171a7a5d0bc"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.350-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e3c0c865-c32e-4505-9045-98f67127cb3b)'. Ident: 'index-1203-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 4614)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.463-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (8a79d53b-f043-431a-8c1c-ba42c2728f12) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 4043), t: 1 } and commit timestamp Timestamp(1574796795, 4043)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.465-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.350-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e3c0c865-c32e-4505-9045-98f67127cb3b)'. Ident: 'index-1205-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 4614)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.463-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (8a79d53b-f043-431a-8c1c-ba42c2728f12).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.468-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de with provided UUID: 5ebdd2b6-953b-46b8-9dbe-17d0421e1a99 and options: { uuid: UUID("5ebdd2b6-953b-46b8-9dbe-17d0421e1a99"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.350-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1200-8224331490264904478, commit timestamp: Timestamp(1574796795, 4614)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.463-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection e3c0c865-c32e-4505-9045-98f67127cb3b from test5_fsmdb0.tmp.agg_out.f6afaa60-696a-42a9-99b3-e728bf024567 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.484-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.350-0500 I INDEX [conn110] Registering index build: 4dd9cfa3-fda3-40f2-973c-e099d81e3e14
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.463-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (8a79d53b-f043-431a-8c1c-ba42c2728f12)'. Ident: 'index-1206--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 4043)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.502-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.350-0500 I INDEX [conn46] Registering index build: 4c922606-5f5d-48a6-8772-5a63f601fcf0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.463-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (8a79d53b-f043-431a-8c1c-ba42c2728f12)'. Ident: 'index-1215--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 4043)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.502-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.350-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6590494973437516212, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3880711068156640626, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796795153), clusterTime: Timestamp(1574796795, 2023) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 2023), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 196ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.463-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1205--4104909142373009110, commit timestamp: Timestamp(1574796795, 4043)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.502-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 861d8e16-63d5-4e58-b13f-87f8984b4839: test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e (566efeed-eba9-4e46-b6fc-d19c2970371c ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.353-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 with generated UUID: 3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.464-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: eb29dcbf-903d-49a8-9059-70a0d51a5504: test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f ( 1850821f-6c98-41fa-8acc-7a60d18b1674 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.502-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.375-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.467-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 with provided UUID: 24268169-6668-4b6b-9dde-f171a7a5d0bc and options: { uuid: UUID("24268169-6668-4b6b-9dde-f171a7a5d0bc"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.503-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.375-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.482-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.505-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.375-0500 I STORAGE [conn110] Index build initialized: 4dd9cfa3-fda3-40f2-973c-e099d81e3e14: test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.485-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de with provided UUID: 5ebdd2b6-953b-46b8-9dbe-17d0421e1a99 and options: { uuid: UUID("5ebdd2b6-953b-46b8-9dbe-17d0421e1a99"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.506-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 (a1b6195b-3d3b-4b66-9447-1b72f300791d) to test5_fsmdb0.agg_out and drop e3c0c865-c32e-4505-9045-98f67127cb3b.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.375-0500 I INDEX [conn110] Waiting for index build to complete: 4dd9cfa3-fda3-40f2-973c-e099d81e3e14
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.498-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.506-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (e3c0c865-c32e-4505-9045-98f67127cb3b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 4614), t: 1 } and commit timestamp Timestamp(1574796795, 4614)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.383-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.514-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.506-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (e3c0c865-c32e-4505-9045-98f67127cb3b).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.384-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.514-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.506-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection a1b6195b-3d3b-4b66-9447-1b72f300791d from test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.384-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (a1b6195b-3d3b-4b66-9447-1b72f300791d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 5117), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.514-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 3efb20d0-c0f5-4ef9-82d3-3b46dad21e2f: test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e (566efeed-eba9-4e46-b6fc-d19c2970371c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.506-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e3c0c865-c32e-4505-9045-98f67127cb3b)'. Ident: 'index-1212--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 4614)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.384-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (a1b6195b-3d3b-4b66-9447-1b72f300791d).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.515-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.506-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e3c0c865-c32e-4505-9045-98f67127cb3b)'. Ident: 'index-1219--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 4614)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.384-0500 I STORAGE [conn112] renameCollection: renaming collection 1850821f-6c98-41fa-8acc-7a60d18b1674 from test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.515-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.506-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1211--8000595249233899911, commit timestamp: Timestamp(1574796795, 4614)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.384-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a1b6195b-3d3b-4b66-9447-1b72f300791d)'. Ident: 'index-1204-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 5117)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.517-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.508-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 861d8e16-63d5-4e58-b13f-87f8984b4839: test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e ( 566efeed-eba9-4e46-b6fc-d19c2970371c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.384-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a1b6195b-3d3b-4b66-9447-1b72f300791d)'. Ident: 'index-1207-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 5117)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.519-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 (a1b6195b-3d3b-4b66-9447-1b72f300791d) to test5_fsmdb0.agg_out and drop e3c0c865-c32e-4505-9045-98f67127cb3b.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.513-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 with provided UUID: 3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d and options: { uuid: UUID("3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.384-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1201-8224331490264904478, commit timestamp: Timestamp(1574796795, 5117)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.519-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (e3c0c865-c32e-4505-9045-98f67127cb3b) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 4614), t: 1 } and commit timestamp Timestamp(1574796795, 4614)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.529-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.384-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.519-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (e3c0c865-c32e-4505-9045-98f67127cb3b).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.533-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f (1850821f-6c98-41fa-8acc-7a60d18b1674) to test5_fsmdb0.agg_out and drop a1b6195b-3d3b-4b66-9447-1b72f300791d.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.384-0500 I INDEX [conn108] Registering index build: 0a36665a-0254-4317-af6e-6b15d8000bf0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.519-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection a1b6195b-3d3b-4b66-9447-1b72f300791d from test5_fsmdb0.tmp.agg_out.82c8562d-26dc-44ed-8432-163763eeba43 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.533-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (a1b6195b-3d3b-4b66-9447-1b72f300791d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 5117), t: 1 } and commit timestamp Timestamp(1574796795, 5117)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.384-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5106861085946073700, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4748231061119778046, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796795223), clusterTime: Timestamp(1574796795, 2658) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 2786), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 160ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.519-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (e3c0c865-c32e-4505-9045-98f67127cb3b)'. Ident: 'index-1212--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 4614)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.533-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (a1b6195b-3d3b-4b66-9447-1b72f300791d).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.385-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.519-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (e3c0c865-c32e-4505-9045-98f67127cb3b)'. Ident: 'index-1219--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 4614)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.533-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 1850821f-6c98-41fa-8acc-7a60d18b1674 from test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.387-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef with generated UUID: 4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.519-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1211--4104909142373009110, commit timestamp: Timestamp(1574796795, 4614)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.533-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a1b6195b-3d3b-4b66-9447-1b72f300791d)'. Ident: 'index-1214--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 5117)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.388-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.520-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3efb20d0-c0f5-4ef9-82d3-3b46dad21e2f: test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e ( 566efeed-eba9-4e46-b6fc-d19c2970371c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.533-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a1b6195b-3d3b-4b66-9447-1b72f300791d)'. Ident: 'index-1223--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 5117)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.404-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 4dd9cfa3-fda3-40f2-973c-e099d81e3e14: test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de ( 5ebdd2b6-953b-46b8-9dbe-17d0421e1a99 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.530-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 with provided UUID: 3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d and options: { uuid: UUID("3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.533-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1213--8000595249233899911, commit timestamp: Timestamp(1574796795, 5117)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.412-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.544-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.539-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef with provided UUID: 4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec and options: { uuid: UUID("4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.412-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.548-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f (1850821f-6c98-41fa-8acc-7a60d18b1674) to test5_fsmdb0.agg_out and drop a1b6195b-3d3b-4b66-9447-1b72f300791d.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.551-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.412-0500 I STORAGE [conn46] Index build initialized: 4c922606-5f5d-48a6-8772-5a63f601fcf0: test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 (24268169-6668-4b6b-9dde-f171a7a5d0bc ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.548-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (a1b6195b-3d3b-4b66-9447-1b72f300791d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 5117), t: 1 } and commit timestamp Timestamp(1574796795, 5117)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.568-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.412-0500 I INDEX [conn46] Waiting for index build to complete: 4c922606-5f5d-48a6-8772-5a63f601fcf0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.548-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (a1b6195b-3d3b-4b66-9447-1b72f300791d).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.568-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.412-0500 I INDEX [conn110] Index build completed: 4dd9cfa3-fda3-40f2-973c-e099d81e3e14
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.548-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 1850821f-6c98-41fa-8acc-7a60d18b1674 from test5_fsmdb0.tmp.agg_out.2504dbdb-44a6-4aa2-90fc-bd5d44342e5f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.568-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: a1c1e6b9-f3ce-42a7-bfad-8d6d32cbff38: test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.420-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.548-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (a1b6195b-3d3b-4b66-9447-1b72f300791d)'. Ident: 'index-1214--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 5117)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.568-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.420-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.548-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (a1b6195b-3d3b-4b66-9447-1b72f300791d)'. Ident: 'index-1223--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 5117)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.569-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.420-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (1850821f-6c98-41fa-8acc-7a60d18b1674) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 5558), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.548-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1213--4104909142373009110, commit timestamp: Timestamp(1574796795, 5117)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.572-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.420-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (1850821f-6c98-41fa-8acc-7a60d18b1674).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.552-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef with provided UUID: 4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec and options: { uuid: UUID("4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.573-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e (566efeed-eba9-4e46-b6fc-d19c2970371c) to test5_fsmdb0.agg_out and drop 1850821f-6c98-41fa-8acc-7a60d18b1674.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.420-0500 I STORAGE [conn114] renameCollection: renaming collection 566efeed-eba9-4e46-b6fc-d19c2970371c from test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.567-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.573-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (1850821f-6c98-41fa-8acc-7a60d18b1674) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 5558), t: 1 } and commit timestamp Timestamp(1574796795, 5558)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.420-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1850821f-6c98-41fa-8acc-7a60d18b1674)'. Ident: 'index-1210-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 5558)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.584-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.573-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (1850821f-6c98-41fa-8acc-7a60d18b1674).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.420-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1850821f-6c98-41fa-8acc-7a60d18b1674)'. Ident: 'index-1211-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 5558)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.584-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.573-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 566efeed-eba9-4e46-b6fc-d19c2970371c from test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.420-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1208-8224331490264904478, commit timestamp: Timestamp(1574796795, 5558)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.584-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: dee6a46d-a406-461b-943d-66586d1f10d1: test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.573-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1850821f-6c98-41fa-8acc-7a60d18b1674)'. Ident: 'index-1218--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 5558)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.421-0500 I INDEX [conn112] Registering index build: 46a2e388-ca22-4445-a86a-7b0482adf472
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.584-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.573-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1850821f-6c98-41fa-8acc-7a60d18b1674)'. Ident: 'index-1225--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 5558)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.421-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.585-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.573-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1217--8000595249233899911, commit timestamp: Timestamp(1574796795, 5558)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.421-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5429267752787784117, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2187310504090450634, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796795259), clusterTime: Timestamp(1574796795, 3035) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 3035), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 161ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.587-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.574-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 with provided UUID: 71c71892-05b1-4b81-a9de-b59cabd8358c and options: { uuid: UUID("71c71892-05b1-4b81-a9de-b59cabd8358c"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.421-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.589-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e (566efeed-eba9-4e46-b6fc-d19c2970371c) to test5_fsmdb0.agg_out and drop 1850821f-6c98-41fa-8acc-7a60d18b1674.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.576-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: a1c1e6b9-f3ce-42a7-bfad-8d6d32cbff38: test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de ( 5ebdd2b6-953b-46b8-9dbe-17d0421e1a99 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.424-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 with generated UUID: 71c71892-05b1-4b81-a9de-b59cabd8358c and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.589-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (1850821f-6c98-41fa-8acc-7a60d18b1674) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 5558), t: 1 } and commit timestamp Timestamp(1574796795, 5558)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.591-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.429-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.589-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (1850821f-6c98-41fa-8acc-7a60d18b1674).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.608-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.446-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.589-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 566efeed-eba9-4e46-b6fc-d19c2970371c from test5_fsmdb0.tmp.agg_out.787b3a7f-9418-4097-be9b-e3308135bf6e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.608-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.446-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.589-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: dee6a46d-a406-461b-943d-66586d1f10d1: test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de ( 5ebdd2b6-953b-46b8-9dbe-17d0421e1a99 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.608-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: def9d188-b0e5-42fd-9684-22ec9e3aa9eb: test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 (24268169-6668-4b6b-9dde-f171a7a5d0bc ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.446-0500 I STORAGE [conn108] Index build initialized: 0a36665a-0254-4317-af6e-6b15d8000bf0: test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 (3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.589-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1850821f-6c98-41fa-8acc-7a60d18b1674)'. Ident: 'index-1218--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 5558)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.608-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.446-0500 I INDEX [conn108] Waiting for index build to complete: 0a36665a-0254-4317-af6e-6b15d8000bf0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.589-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1850821f-6c98-41fa-8acc-7a60d18b1674)'. Ident: 'index-1225--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 5558)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.609-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.449-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 4c922606-5f5d-48a6-8772-5a63f601fcf0: test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 ( 24268169-6668-4b6b-9dde-f171a7a5d0bc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.589-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1217--4104909142373009110, commit timestamp: Timestamp(1574796795, 5558)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.611-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.458-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.592-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 with provided UUID: 71c71892-05b1-4b81-a9de-b59cabd8358c and options: { uuid: UUID("71c71892-05b1-4b81-a9de-b59cabd8358c"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.613-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99) to test5_fsmdb0.agg_out and drop 566efeed-eba9-4e46-b6fc-d19c2970371c.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.605-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.613-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (566efeed-eba9-4e46-b6fc-d19c2970371c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 6064), t: 1 } and commit timestamp Timestamp(1574796795, 6064)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.622-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.613-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (566efeed-eba9-4e46-b6fc-d19c2970371c).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I STORAGE [conn112] Index build initialized: 46a2e388-ca22-4445-a86a-7b0482adf472: test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef (4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.622-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.613-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 5ebdd2b6-953b-46b8-9dbe-17d0421e1a99 from test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I INDEX [conn112] Waiting for index build to complete: 46a2e388-ca22-4445-a86a-7b0482adf472
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.622-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: af7dc69a-7eca-4f0b-bf6e-0f75d4187b0d: test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 (24268169-6668-4b6b-9dde-f171a7a5d0bc ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.613-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (566efeed-eba9-4e46-b6fc-d19c2970371c)'. Ident: 'index-1222--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 6064)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.622-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.613-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (566efeed-eba9-4e46-b6fc-d19c2970371c)'. Ident: 'index-1231--8000595249233899911', commit timestamp: 'Timestamp(1574796795, 6064)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I INDEX [conn46] Index build completed: 4c922606-5f5d-48a6-8772-5a63f601fcf0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.623-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.613-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1221--8000595249233899911, commit timestamp: Timestamp(1574796795, 6064)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (566efeed-eba9-4e46-b6fc-d19c2970371c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 6064), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.624-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:15.615-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: def9d188-b0e5-42fd-9684-22ec9e3aa9eb: test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 ( 24268169-6668-4b6b-9dde-f171a7a5d0bc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 4613), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 15854 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 126ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.625-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796795, 5625) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796795, 5689), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 181ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (566efeed-eba9-4e46-b6fc-d19c2970371c).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.043-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.627-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99) to test5_fsmdb0.agg_out and drop 566efeed-eba9-4e46-b6fc-d19c2970371c.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I STORAGE [conn110] renameCollection: renaming collection 5ebdd2b6-953b-46b8-9dbe-17d0421e1a99 from test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.043-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.627-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (566efeed-eba9-4e46-b6fc-d19c2970371c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796795, 6064), t: 1 } and commit timestamp Timestamp(1574796795, 6064)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (566efeed-eba9-4e46-b6fc-d19c2970371c)'. Ident: 'index-1214-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 6064)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.043-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: db75784e-0668-402a-90a9-0221d2acd404: test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 (3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.627-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (566efeed-eba9-4e46-b6fc-d19c2970371c).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (566efeed-eba9-4e46-b6fc-d19c2970371c)'. Ident: 'index-1215-8224331490264904478', commit timestamp: 'Timestamp(1574796795, 6064)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.044-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.627-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 5ebdd2b6-953b-46b8-9dbe-17d0421e1a99 from test5_fsmdb0.tmp.agg_out.3f0ac1c2-cf6b-4c75-b592-00f3e69554de to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1212-8224331490264904478, commit timestamp: Timestamp(1574796795, 6064)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.044-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.627-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (566efeed-eba9-4e46-b6fc-d19c2970371c)'. Ident: 'index-1222--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 6064)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.047-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.627-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (566efeed-eba9-4e46-b6fc-d19c2970371c)'. Ident: 'index-1231--4104909142373009110', commit timestamp: 'Timestamp(1574796795, 6064)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I INDEX [conn114] Registering index build: 1941888b-e6ba-42af-8bed-8b71b6ab937e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.627-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1221--4104909142373009110, commit timestamp: Timestamp(1574796795, 6064)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:15.627-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: af7dc69a-7eca-4f0b-bf6e-0f75d4187b0d: test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 ( 24268169-6668-4b6b-9dde-f171a7a5d0bc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.470-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3956765419731804884, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1197862499730770343, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796795313), clusterTime: Timestamp(1574796795, 4045) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 4110), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 156ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.471-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:15.488-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.021-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.021-0500 I STORAGE [conn114] Index build initialized: 1941888b-e6ba-42af-8bed-8b71b6ab937e: test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 (71c71892-05b1-4b81-a9de-b59cabd8358c ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.021-0500 I INDEX [conn114] Waiting for index build to complete: 1941888b-e6ba-42af-8bed-8b71b6ab937e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.021-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.024-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.026-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.026-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.026-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 6), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.026-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.026-0500 I STORAGE [conn46] renameCollection: renaming collection 24268169-6668-4b6b-9dde-f171a7a5d0bc from test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.026-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99)'. Ident: 'index-1220-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 6)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.026-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99)'. Ident: 'index-1221-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 6)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.026-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1218-8224331490264904478, commit timestamp: Timestamp(1574796798, 6)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.026-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 appName: "tid:3" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "off", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 6564), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2537558 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2538ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.026-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.026-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796795, 5625), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796795, 5689), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796795, 5625). Collection minimum timestamp is Timestamp(1574796798, 6)" errName:SnapshotUnavailable errCode:246 reslen:599 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2399436 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2399ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.026-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2798899110744642285, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1926951085903927976, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796795300), clusterTime: Timestamp(1574796795, 4043) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 4043), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 11682 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2725ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.056-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: db75784e-0668-402a-90a9-0221d2acd404: test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 ( 3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.028-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 0a36665a-0254-4317-af6e-6b15d8000bf0: test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 ( 3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.028-0500 I INDEX [conn108] Index build completed: 0a36665a-0254-4317-af6e-6b15d8000bf0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.028-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 5117), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 8253 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2644ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.029-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 46a2e388-ca22-4445-a86a-7b0482adf472: test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef ( 4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.029-0500 I INDEX [conn112] Index build completed: 46a2e388-ca22-4445-a86a-7b0482adf472
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.030-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 5558), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 99 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2609ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.030-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.032-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.033-0500 I COMMAND [conn71] CMD: dropIndexes test5_fsmdb0.agg_out: { flag: 1.0 }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.034-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 1941888b-e6ba-42af-8bed-8b71b6ab937e: test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 ( 71c71892-05b1-4b81-a9de-b59cabd8358c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.034-0500 I INDEX [conn114] Index build completed: 1941888b-e6ba-42af-8bed-8b71b6ab937e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.034-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 6062), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 12081 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2576ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.035-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 with generated UUID: 81c626ee-c495-4a26-a5b2-dadee865ec86 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.035-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 with generated UUID: 3a9f8aa6-0b8d-443d-9308-e55e3dc63658 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.058-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.060-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.060-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.060-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: e87d332a-b1b5-4539-a726-98f749780d72: test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 (3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.060-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.061-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.063-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.065-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e87d332a-b1b5-4539-a726-98f749780d72: test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 ( 3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.065-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.065-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.065-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 2cf1128d-8b37-4007-921b-3ff0187b3b94: test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef (4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.065-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.066-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.067-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.067-0500 I COMMAND [conn46] CMD: drop test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.068-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 (24268169-6668-4b6b-9dde-f171a7a5d0bc) to test5_fsmdb0.agg_out and drop 5ebdd2b6-953b-46b8-9dbe-17d0421e1a99.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.068-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.068-0500 I INDEX [conn112] Registering index build: 3e8658cd-ccb1-4cfa-bb76-a9616d599703
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.068-0500 I INDEX [conn108] Registering index build: 70f73d4c-b3ea-4505-99c6-942af826acd0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.068-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 (3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.068-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 (3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.068-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 6), t: 1 } and commit timestamp Timestamp(1574796798, 6)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.068-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 (3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d)'. Ident: 'index-1224-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 1512)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.068-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.068-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 (3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d)'. Ident: 'index-1229-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 1512)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.068-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 24268169-6668-4b6b-9dde-f171a7a5d0bc from test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.068-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8'. Ident: collection-1222-8224331490264904478, commit timestamp: Timestamp(1574796798, 1512)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.068-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99)'. Ident: 'index-1230--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 6)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.068-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99)'. Ident: 'index-1237--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 6)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.068-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1229--8000595249233899911, commit timestamp: Timestamp(1574796798, 6)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.068-0500 I COMMAND [conn110] CMD: drop test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.068-0500 I COMMAND [conn70] command test5_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8088646602669316444, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2864535257997828073, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796795352), clusterTime: Timestamp(1574796795, 4678) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 4742), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:986 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2716ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:18.069-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796795, 4678), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:816 protocol:op_msg 2717ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.069-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 2cf1128d-8b37-4007-921b-3ff0187b3b94: test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef ( 4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.082-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.082-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.082-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 1acd29a4-dc51-4799-9b17-062e738536b4: test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef (4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.082-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.083-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.083-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.083-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.083-0500 I STORAGE [conn112] Index build initialized: 3e8658cd-ccb1-4cfa-bb76-a9616d599703: test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 (81c626ee-c495-4a26-a5b2-dadee865ec86 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.083-0500 I INDEX [conn112] Waiting for index build to complete: 3e8658cd-ccb1-4cfa-bb76-a9616d599703
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.083-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef (4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.083-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef (4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.083-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef (4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec)'. Ident: 'index-1228-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 1514)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.083-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef (4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec)'. Ident: 'index-1233-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 1514)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.083-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef'. Ident: collection-1226-8224331490264904478, commit timestamp: Timestamp(1574796798, 1514)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.084-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.084-0500 I COMMAND [conn67] command test5_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2241939008553447015, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1023576511127216889, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796795386), clusterTime: Timestamp(1574796795, 5181) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 5245), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:986 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2697ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.084-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 (24268169-6668-4b6b-9dde-f171a7a5d0bc) to test5_fsmdb0.agg_out and drop 5ebdd2b6-953b-46b8-9dbe-17d0421e1a99.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.084-0500 I COMMAND [conn114] CMD: drop test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:18.084-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796795, 5181), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:816 protocol:op_msg 2698ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.084-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.085-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f with generated UUID: bf937f7a-9a3e-4070-93ab-4bc8cd98ece1 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.085-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.085-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 6), t: 1 } and commit timestamp Timestamp(1574796798, 6)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.086-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.086-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 24268169-6668-4b6b-9dde-f171a7a5d0bc from test5_fsmdb0.tmp.agg_out.63a98a2f-2fc9-4bd3-a9d7-4752c320cdf6 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.086-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99)'. Ident: 'index-1230--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 6)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.086-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (5ebdd2b6-953b-46b8-9dbe-17d0421e1a99)'. Ident: 'index-1237--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 6)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.086-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1229--4104909142373009110, commit timestamp: Timestamp(1574796798, 6)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.085-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.086-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.086-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: f35b4ef0-e404-437b-a25f-3edfba34cc40: test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 (71c71892-05b1-4b81-a9de-b59cabd8358c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.086-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.086-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e with generated UUID: badf6d1c-a143-4f31-a54b-81c3db96d14c and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.087-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.088-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 1acd29a4-dc51-4799-9b17-062e738536b4: test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef ( 4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.089-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.090-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 with provided UUID: 81c626ee-c495-4a26-a5b2-dadee865ec86 and options: { uuid: UUID("81c626ee-c495-4a26-a5b2-dadee865ec86"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.093-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: f35b4ef0-e404-437b-a25f-3edfba34cc40: test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 ( 71c71892-05b1-4b81-a9de-b59cabd8358c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.094-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.103-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.103-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.103-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 8a704e36-9038-4a15-95f9-8b8d0a215102: test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 (71c71892-05b1-4b81-a9de-b59cabd8358c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.104-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.104-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.106-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.108-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.115-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:18.116-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796795, 5558), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:816 protocol:op_msg 2693ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:18.244-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796798, 1512), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 173ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:21.071-0500 I NETWORK [conn64] end connection 127.0.0.1:46102 (41 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.108-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 8a704e36-9038-4a15-95f9-8b8d0a215102: test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 ( 71c71892-05b1-4b81-a9de-b59cabd8358c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.108-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 with provided UUID: 3a9f8aa6-0b8d-443d-9308-e55e3dc63658 and options: { uuid: UUID("3a9f8aa6-0b8d-443d-9308-e55e3dc63658"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.115-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:18.165-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796798, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 131ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:18.244-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796798, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 210ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.109-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 with provided UUID: 81c626ee-c495-4a26-a5b2-dadee865ec86 and options: { uuid: UUID("81c626ee-c495-4a26-a5b2-dadee865ec86"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.122-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.115-0500 I STORAGE [conn108] Index build initialized: 70f73d4c-b3ea-4505-99c6-942af826acd0: test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 (3a9f8aa6-0b8d-443d-9308-e55e3dc63658 ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:18.299-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796798, 1520), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 181ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:18.420-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796798, 3037), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 175ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.126-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.134-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.115-0500 I INDEX [conn108] Waiting for index build to complete: 70f73d4c-b3ea-4505-99c6-942af826acd0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:18.318-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796798, 1514), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 232ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:18.460-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796798, 3037), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 213ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.126-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 with provided UUID: 3a9f8aa6-0b8d-443d-9308-e55e3dc63658 and options: { uuid: UUID("3a9f8aa6-0b8d-443d-9308-e55e3dc63658"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.134-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 (3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 1512), t: 1 } and commit timestamp Timestamp(1574796798, 1512)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.116-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 (71c71892-05b1-4b81-a9de-b59cabd8358c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:18.385-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796798, 2025), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 218ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.140-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.134-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 (3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.116-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 (71c71892-05b1-4b81-a9de-b59cabd8358c).
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:21.071-0500 I CONNPOOL [TaskExecutorPool-0] Ending idle connection to host localhost:20004 because the pool meets constraints; 2 connections to that host remain open
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.151-0500 I COMMAND [ReplWriterWorker-4] CMD: drop test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.134-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 (3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d)'. Ident: 'index-1234--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 1512)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.116-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.151-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 (3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 1512), t: 1 } and commit timestamp Timestamp(1574796798, 1512)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:21.228-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796798, 4303), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2909ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.134-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 (3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d)'. Ident: 'index-1243--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 1512)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.116-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 (71c71892-05b1-4b81-a9de-b59cabd8358c)'. Ident: 'index-1232-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 1520)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.151-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 (3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.134-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8'. Ident: collection-1233--8000595249233899911, commit timestamp: Timestamp(1574796798, 1512)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.116-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 (71c71892-05b1-4b81-a9de-b59cabd8358c)'. Ident: 'index-1235-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 1520)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.151-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 (3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d)'. Ident: 'index-1234--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 1512)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.136-0500 I COMMAND [ReplWriterWorker-1] CMD: drop test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.116-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459'. Ident: collection-1230-8224331490264904478, commit timestamp: Timestamp(1574796798, 1520)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.151-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8 (3f6b7f7c-4fe4-444d-bab1-2df49f8c7d5d)'. Ident: 'index-1243--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 1512)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.136-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef (4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 1514), t: 1 } and commit timestamp Timestamp(1574796798, 1514)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.116-0500 I COMMAND [conn65] command test5_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8984327534959751150, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3767536133731782960, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796795423), clusterTime: Timestamp(1574796795, 5558) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796795, 5558), signature: { hash: BinData(0, 9947F91B2699C8638F258810341B434EC1837A0B), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796787, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"strict\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:986 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2692ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.151-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.22f3055a-73cf-46f6-bc2f-f61bf232e1d8'. Ident: collection-1233--4104909142373009110, commit timestamp: Timestamp(1574796798, 1512)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.136-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef (4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.116-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 3e8658cd-ccb1-4cfa-bb76-a9616d599703: test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 ( 81c626ee-c495-4a26-a5b2-dadee865ec86 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.152-0500 I COMMAND [ReplWriterWorker-3] CMD: drop test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.136-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef (4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec)'. Ident: 'index-1236--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 1514)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.116-0500 I INDEX [conn112] Index build completed: 3e8658cd-ccb1-4cfa-bb76-a9616d599703
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.152-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef (4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 1514), t: 1 } and commit timestamp Timestamp(1574796798, 1514)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.136-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef (4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec)'. Ident: 'index-1245--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 1514)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.119-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 with generated UUID: 52619f84-206a-481f-855a-310d351a6bc6 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.152-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef (4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.136-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef'. Ident: collection-1235--8000595249233899911, commit timestamp: Timestamp(1574796798, 1514)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.124-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.152-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef (4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec)'. Ident: 'index-1236--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 1514)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.137-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f with provided UUID: bf937f7a-9a3e-4070-93ab-4bc8cd98ece1 and options: { uuid: UUID("bf937f7a-9a3e-4070-93ab-4bc8cd98ece1"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.125-0500 I INDEX [conn110] Registering index build: 00da8040-534a-4fca-9b0f-6b3979c26808
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.152-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef (4dbf40f9-0fdf-4a12-a81d-d2f31910f4ec)'. Ident: 'index-1245--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 1514)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.152-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.129-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.152-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.d40498f3-7ec9-44f4-9316-0889ca3464ef'. Ident: collection-1235--4104909142373009110, commit timestamp: Timestamp(1574796798, 1514)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.153-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e with provided UUID: badf6d1c-a143-4f31-a54b-81c3db96d14c and options: { uuid: UUID("badf6d1c-a143-4f31-a54b-81c3db96d14c"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.129-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.153-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f with provided UUID: bf937f7a-9a3e-4070-93ab-4bc8cd98ece1 and options: { uuid: UUID("bf937f7a-9a3e-4070-93ab-4bc8cd98ece1"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.169-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.129-0500 I INDEX [conn46] Registering index build: 09a39473-d2e0-4992-a558-2053610ea6f2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.169-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.184-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.147-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.170-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e with provided UUID: badf6d1c-a143-4f31-a54b-81c3db96d14c and options: { uuid: UUID("badf6d1c-a143-4f31-a54b-81c3db96d14c"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.184-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.157-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.186-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.184-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 91ce2fc0-29ed-49d4-94ee-d39c09dd496b: test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 (81c626ee-c495-4a26-a5b2-dadee865ec86 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.164-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.201-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.184-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.164-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.201-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.185-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.164-0500 I STORAGE [conn110] Index build initialized: 00da8040-534a-4fca-9b0f-6b3979c26808: test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.201-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 3d92b8b2-fb4b-4a07-a9a6-b50f031da12d: test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 (81c626ee-c495-4a26-a5b2-dadee865ec86 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.186-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.164-0500 I INDEX [conn110] Waiting for index build to complete: 00da8040-534a-4fca-9b0f-6b3979c26808
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.201-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.187-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 (71c71892-05b1-4b81-a9de-b59cabd8358c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 1520), t: 1 } and commit timestamp Timestamp(1574796798, 1520)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.164-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.202-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.187-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 (71c71892-05b1-4b81-a9de-b59cabd8358c).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.164-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (24268169-6668-4b6b-9dde-f171a7a5d0bc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 2025), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.204-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.187-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 (71c71892-05b1-4b81-a9de-b59cabd8358c)'. Ident: 'index-1240--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 1520)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.164-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (24268169-6668-4b6b-9dde-f171a7a5d0bc).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.204-0500 I COMMAND [ReplWriterWorker-4] CMD: drop test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.187-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 (71c71892-05b1-4b81-a9de-b59cabd8358c)'. Ident: 'index-1247--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 1520)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.164-0500 I STORAGE [conn112] renameCollection: renaming collection 81c626ee-c495-4a26-a5b2-dadee865ec86 from test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.204-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 (71c71892-05b1-4b81-a9de-b59cabd8358c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 1520), t: 1 } and commit timestamp Timestamp(1574796798, 1520)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.187-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459'. Ident: collection-1239--8000595249233899911, commit timestamp: Timestamp(1574796798, 1520)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.164-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (24268169-6668-4b6b-9dde-f171a7a5d0bc)'. Ident: 'index-1219-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 2025)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.204-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 (71c71892-05b1-4b81-a9de-b59cabd8358c).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.187-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 with provided UUID: 52619f84-206a-481f-855a-310d351a6bc6 and options: { uuid: UUID("52619f84-206a-481f-855a-310d351a6bc6"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.164-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (24268169-6668-4b6b-9dde-f171a7a5d0bc)'. Ident: 'index-1225-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 2025)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.204-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 (71c71892-05b1-4b81-a9de-b59cabd8358c)'. Ident: 'index-1240--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 1520)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.188-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.164-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1217-8224331490264904478, commit timestamp: Timestamp(1574796798, 2025)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.204-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459 (71c71892-05b1-4b81-a9de-b59cabd8358c)'. Ident: 'index-1247--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 1520)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.196-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 91ce2fc0-29ed-49d4-94ee-d39c09dd496b: test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 ( 81c626ee-c495-4a26-a5b2-dadee865ec86 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.164-0500 I INDEX [conn114] Registering index build: ac5f0a2c-55e3-485f-bb35-94af81fcf1ea
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.204-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.d77d0b26-e43b-4990-905b-107631a7d459'. Ident: collection-1239--4104909142373009110, commit timestamp: Timestamp(1574796798, 1520)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.205-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.164-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.206-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 with provided UUID: 52619f84-206a-481f-855a-310d351a6bc6 and options: { uuid: UUID("52619f84-206a-481f-855a-310d351a6bc6"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.223-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.164-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7162553688162626244, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6027293247899710433, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796798034), clusterTime: Timestamp(1574796798, 9) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 9), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 129ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.207-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 3d92b8b2-fb4b-4a07-a9a6-b50f031da12d: test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 ( 81c626ee-c495-4a26-a5b2-dadee865ec86 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.223-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.165-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 70f73d4c-b3ea-4505-99c6-942af826acd0: test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 ( 3a9f8aa6-0b8d-443d-9308-e55e3dc63658 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.220-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.223-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 790155c6-6a6c-40a8-96b9-4bbce8d66ce1: test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 (3a9f8aa6-0b8d-443d-9308-e55e3dc63658 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.165-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.239-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.224-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.167-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db with generated UUID: ee0f9b64-4ca5-4664-8395-694f8bb5ee60 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.239-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.224-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.168-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.239-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 0390a63c-339b-4028-8173-f03b4e85c2df: test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 (3a9f8aa6-0b8d-443d-9308-e55e3dc63658 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.226-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 (81c626ee-c495-4a26-a5b2-dadee865ec86) to test5_fsmdb0.agg_out and drop 24268169-6668-4b6b-9dde-f171a7a5d0bc.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.184-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 00da8040-534a-4fca-9b0f-6b3979c26808: test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f ( bf937f7a-9a3e-4070-93ab-4bc8cd98ece1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.239-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.227-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.194-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.239-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.228-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (24268169-6668-4b6b-9dde-f171a7a5d0bc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 2025), t: 1 } and commit timestamp Timestamp(1574796798, 2025)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.194-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.241-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 (81c626ee-c495-4a26-a5b2-dadee865ec86) to test5_fsmdb0.agg_out and drop 24268169-6668-4b6b-9dde-f171a7a5d0bc.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.228-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (24268169-6668-4b6b-9dde-f171a7a5d0bc).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.194-0500 I STORAGE [conn46] Index build initialized: 09a39473-d2e0-4992-a558-2053610ea6f2: test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e (badf6d1c-a143-4f31-a54b-81c3db96d14c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.241-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.228-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 81c626ee-c495-4a26-a5b2-dadee865ec86 from test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.194-0500 I INDEX [conn110] Index build completed: 00da8040-534a-4fca-9b0f-6b3979c26808
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.241-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (24268169-6668-4b6b-9dde-f171a7a5d0bc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 2025), t: 1 } and commit timestamp Timestamp(1574796798, 2025)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.228-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (24268169-6668-4b6b-9dde-f171a7a5d0bc)'. Ident: 'index-1228--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 2025)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.194-0500 I INDEX [conn108] Index build completed: 70f73d4c-b3ea-4505-99c6-942af826acd0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.241-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (24268169-6668-4b6b-9dde-f171a7a5d0bc).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.228-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (24268169-6668-4b6b-9dde-f171a7a5d0bc)'. Ident: 'index-1241--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 2025)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.194-0500 I INDEX [conn46] Waiting for index build to complete: 09a39473-d2e0-4992-a558-2053610ea6f2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.241-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 81c626ee-c495-4a26-a5b2-dadee865ec86 from test5_fsmdb0.tmp.agg_out.e92d0cd9-05c6-41d6-a159-ebd15dd622f1 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.228-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1227--8000595249233899911, commit timestamp: Timestamp(1574796798, 2025)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.194-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.241-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (24268169-6668-4b6b-9dde-f171a7a5d0bc)'. Ident: 'index-1228--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 2025)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.228-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db with provided UUID: ee0f9b64-4ca5-4664-8395-694f8bb5ee60 and options: { uuid: UUID("ee0f9b64-4ca5-4664-8395-694f8bb5ee60"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.194-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 1511), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 1236 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 127ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.241-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (24268169-6668-4b6b-9dde-f171a7a5d0bc)'. Ident: 'index-1241--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 2025)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.229-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 790155c6-6a6c-40a8-96b9-4bbce8d66ce1: test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 ( 3a9f8aa6-0b8d-443d-9308-e55e3dc63658 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.200-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.241-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1227--4104909142373009110, commit timestamp: Timestamp(1574796798, 2025)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.244-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.200-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.243-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 0390a63c-339b-4028-8173-f03b4e85c2df: test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 ( 3a9f8aa6-0b8d-443d-9308-e55e3dc63658 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.261-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.200-0500 I INDEX [conn112] Registering index build: 5c73344e-3661-426c-8fd2-c2061ae32956
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.245-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db with provided UUID: ee0f9b64-4ca5-4664-8395-694f8bb5ee60 and options: { uuid: UUID("ee0f9b64-4ca5-4664-8395-694f8bb5ee60"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.261-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.210-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.261-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.261-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: f79cef74-5e2c-4fb3-a6e8-9a3d1175e9f0: test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.223-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.278-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.261-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.223-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.278-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.262-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.223-0500 I STORAGE [conn114] Index build initialized: ac5f0a2c-55e3-485f-bb35-94af81fcf1ea: test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 (52619f84-206a-481f-855a-310d351a6bc6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.278-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 9f49976d-708e-4c3d-b2bf-83c26ed81476: test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.264-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.223-0500 I INDEX [conn114] Waiting for index build to complete: ac5f0a2c-55e3-485f-bb35-94af81fcf1ea
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.278-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.270-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: f79cef74-5e2c-4fb3-a6e8-9a3d1175e9f0: test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f ( bf937f7a-9a3e-4070-93ab-4bc8cd98ece1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.223-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.278-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.287-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.224-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 09a39473-d2e0-4992-a558-2053610ea6f2: test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e ( badf6d1c-a143-4f31-a54b-81c3db96d14c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.281-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.287-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.224-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.284-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 9f49976d-708e-4c3d-b2bf-83c26ed81476: test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f ( bf937f7a-9a3e-4070-93ab-4bc8cd98ece1 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.287-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 1eef9114-a395-4e03-8b94-605e996a05d6: test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e (badf6d1c-a143-4f31-a54b-81c3db96d14c ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.235-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.304-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.287-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.304-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.288-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.304-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 9e5e048c-272b-47ff-ab29-1c52a23fde80: test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e (badf6d1c-a143-4f31-a54b-81c3db96d14c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.290-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I STORAGE [conn112] Index build initialized: 5c73344e-3661-426c-8fd2-c2061ae32956: test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db (ee0f9b64-4ca5-4664-8395-694f8bb5ee60 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.304-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.293-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 1eef9114-a395-4e03-8b94-605e996a05d6: test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e ( badf6d1c-a143-4f31-a54b-81c3db96d14c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I INDEX [conn112] Waiting for index build to complete: 5c73344e-3661-426c-8fd2-c2061ae32956
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.304-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.308-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I INDEX [conn46] Index build completed: 09a39473-d2e0-4992-a558-2053610ea6f2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.307-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.308-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.310-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 9e5e048c-272b-47ff-ab29-1c52a23fde80: test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e ( badf6d1c-a143-4f31-a54b-81c3db96d14c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.308-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: fbc336f5-e2a1-4d84-9f9c-cc1d50635bf9: test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 (52619f84-206a-481f-855a-310d351a6bc6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (81c626ee-c495-4a26-a5b2-dadee865ec86) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 3036), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.328-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.308-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (81c626ee-c495-4a26-a5b2-dadee865ec86).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.328-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.309-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 1585), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 409 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 113ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.328-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: dd46f412-3cd3-4e07-a05b-36d495a8b5a4: test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 (52619f84-206a-481f-855a-310d351a6bc6 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.310-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1) to test5_fsmdb0.agg_out and drop 81c626ee-c495-4a26-a5b2-dadee865ec86.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I STORAGE [conn110] renameCollection: renaming collection bf937f7a-9a3e-4070-93ab-4bc8cd98ece1 from test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.329-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.311-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (81c626ee-c495-4a26-a5b2-dadee865ec86)'. Ident: 'index-1239-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 3036)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.329-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.312-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (81c626ee-c495-4a26-a5b2-dadee865ec86) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 3036), t: 1 } and commit timestamp Timestamp(1574796798, 3036)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (81c626ee-c495-4a26-a5b2-dadee865ec86)'. Ident: 'index-1241-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 3036)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.330-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1) to test5_fsmdb0.agg_out and drop 81c626ee-c495-4a26-a5b2-dadee865ec86.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.312-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (81c626ee-c495-4a26-a5b2-dadee865ec86).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1237-8224331490264904478, commit timestamp: Timestamp(1574796798, 3036)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.332-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.312-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection bf937f7a-9a3e-4070-93ab-4bc8cd98ece1 from test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.332-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (81c626ee-c495-4a26-a5b2-dadee865ec86) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 3036), t: 1 } and commit timestamp Timestamp(1574796798, 3036)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.312-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (81c626ee-c495-4a26-a5b2-dadee865ec86)'. Ident: 'index-1250--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 3036)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 3037), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.332-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (81c626ee-c495-4a26-a5b2-dadee865ec86).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.312-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (81c626ee-c495-4a26-a5b2-dadee865ec86)'. Ident: 'index-1257--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 3036)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.332-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection bf937f7a-9a3e-4070-93ab-4bc8cd98ece1 from test5_fsmdb0.tmp.agg_out.b5275729-88cf-4472-b62e-a938c949dc6f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.312-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1249--8000595249233899911, commit timestamp: Timestamp(1574796798, 3036)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5862748942397613592, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2593065931845190956, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796798070), clusterTime: Timestamp(1574796798, 1512) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 1514), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 159ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.332-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (81c626ee-c495-4a26-a5b2-dadee865ec86)'. Ident: 'index-1250--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 3036)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.313-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 (3a9f8aa6-0b8d-443d-9308-e55e3dc63658) to test5_fsmdb0.agg_out and drop bf937f7a-9a3e-4070-93ab-4bc8cd98ece1.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.243-0500 I STORAGE [conn108] renameCollection: renaming collection 3a9f8aa6-0b8d-443d-9308-e55e3dc63658 from test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.332-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (81c626ee-c495-4a26-a5b2-dadee865ec86)'. Ident: 'index-1257--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 3036)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.313-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 3037), t: 1 } and commit timestamp Timestamp(1574796798, 3037)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.244-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1)'. Ident: 'index-1247-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 3037)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.332-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1249--4104909142373009110, commit timestamp: Timestamp(1574796798, 3036)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.313-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.244-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1)'. Ident: 'index-1250-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 3037)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.333-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 (3a9f8aa6-0b8d-443d-9308-e55e3dc63658) to test5_fsmdb0.agg_out and drop bf937f7a-9a3e-4070-93ab-4bc8cd98ece1.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.313-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 3a9f8aa6-0b8d-443d-9308-e55e3dc63658 from test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.244-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1244-8224331490264904478, commit timestamp: Timestamp(1574796798, 3037)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.333-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 3037), t: 1 } and commit timestamp Timestamp(1574796798, 3037)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.313-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1)'. Ident: 'index-1254--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 3037)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.244-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.333-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.313-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1)'. Ident: 'index-1265--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 3037)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.244-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: ac5f0a2c-55e3-485f-bb35-94af81fcf1ea: test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 ( 52619f84-206a-481f-855a-310d351a6bc6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.333-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 3a9f8aa6-0b8d-443d-9308-e55e3dc63658 from test5_fsmdb0.tmp.agg_out.995d003f-db5a-4335-8352-3a4cb4356c71 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.313-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1253--8000595249233899911, commit timestamp: Timestamp(1574796798, 3037)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.244-0500 I INDEX [conn114] Index build completed: ac5f0a2c-55e3-485f-bb35-94af81fcf1ea
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.333-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1)'. Ident: 'index-1254--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 3037)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.314-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 with provided UUID: 62c8349e-1e3c-48a2-a729-8f84a401c16e and options: { uuid: UUID("62c8349e-1e3c-48a2-a729-8f84a401c16e"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.244-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7247579711206940961, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1533021823187689007, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796798034), clusterTime: Timestamp(1574796798, 9) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 9), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 209ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.333-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bf937f7a-9a3e-4070-93ab-4bc8cd98ece1)'. Ident: 'index-1265--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 3037)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.315-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: fbc336f5-e2a1-4d84-9f9c-cc1d50635bf9: test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 ( 52619f84-206a-481f-855a-310d351a6bc6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.244-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.333-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1253--4104909142373009110, commit timestamp: Timestamp(1574796798, 3037)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.329-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.245-0500 I COMMAND [conn70] CMD: dropIndexes test5_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.334-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 with provided UUID: 62c8349e-1e3c-48a2-a729-8f84a401c16e and options: { uuid: UUID("62c8349e-1e3c-48a2-a729-8f84a401c16e"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.346-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.246-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 with generated UUID: 62c8349e-1e3c-48a2-a729-8f84a401c16e and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.335-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: dd46f412-3cd3-4e07-a05b-36d495a8b5a4: test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 ( 52619f84-206a-481f-855a-310d351a6bc6 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.346-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.247-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.378-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.346-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 92239bcf-7b77-4171-a6a1-404edc6d1adf: test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db (ee0f9b64-4ca5-4664-8395-694f8bb5ee60 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.248-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a with generated UUID: 4eb3eb12-c63c-46e4-862d-c0273e276971 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.392-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.346-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.256-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 5c73344e-3661-426c-8fd2-c2061ae32956: test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db ( ee0f9b64-4ca5-4664-8395-694f8bb5ee60 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.392-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.346-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.256-0500 I INDEX [conn112] Index build completed: 5c73344e-3661-426c-8fd2-c2061ae32956
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.392-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 84d9cd3a-c566-4b57-a5b0-4189831d4eee: test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db (ee0f9b64-4ca5-4664-8395-694f8bb5ee60 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.348-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a with provided UUID: 4eb3eb12-c63c-46e4-862d-c0273e276971 and options: { uuid: UUID("4eb3eb12-c63c-46e4-862d-c0273e276971"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.274-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.393-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.377-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.275-0500 I INDEX [conn114] Registering index build: 5a03bfe4-b7a5-4fd6-ac30-20776d27b2bd
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.393-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.384-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 92239bcf-7b77-4171-a6a1-404edc6d1adf: test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db ( ee0f9b64-4ca5-4664-8395-694f8bb5ee60 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.282-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.395-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a with provided UUID: 4eb3eb12-c63c-46e4-862d-c0273e276971 and options: { uuid: UUID("4eb3eb12-c63c-46e4-862d-c0273e276971"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.392-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.297-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.395-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.400-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 (52619f84-206a-481f-855a-310d351a6bc6) to test5_fsmdb0.agg_out and drop 3a9f8aa6-0b8d-443d-9308-e55e3dc63658.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.298-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.404-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 84d9cd3a-c566-4b57-a5b0-4189831d4eee: test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db ( ee0f9b64-4ca5-4664-8395-694f8bb5ee60 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.400-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (3a9f8aa6-0b8d-443d-9308-e55e3dc63658) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 4055), t: 1 } and commit timestamp Timestamp(1574796798, 4055)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.298-0500 I STORAGE [conn114] Index build initialized: 5a03bfe4-b7a5-4fd6-ac30-20776d27b2bd: test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 (62c8349e-1e3c-48a2-a729-8f84a401c16e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.411-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.400-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (3a9f8aa6-0b8d-443d-9308-e55e3dc63658).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.298-0500 I INDEX [conn114] Waiting for index build to complete: 5a03bfe4-b7a5-4fd6-ac30-20776d27b2bd
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.421-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 (52619f84-206a-481f-855a-310d351a6bc6) to test5_fsmdb0.agg_out and drop 3a9f8aa6-0b8d-443d-9308-e55e3dc63658.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.400-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 52619f84-206a-481f-855a-310d351a6bc6 from test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.298-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.421-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (3a9f8aa6-0b8d-443d-9308-e55e3dc63658) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 4055), t: 1 } and commit timestamp Timestamp(1574796798, 4055)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.400-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3a9f8aa6-0b8d-443d-9308-e55e3dc63658)'. Ident: 'index-1252--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 4055)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.298-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (3a9f8aa6-0b8d-443d-9308-e55e3dc63658) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 4055), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.421-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (3a9f8aa6-0b8d-443d-9308-e55e3dc63658).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.400-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3a9f8aa6-0b8d-443d-9308-e55e3dc63658)'. Ident: 'index-1261--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 4055)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.298-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (3a9f8aa6-0b8d-443d-9308-e55e3dc63658).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.421-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 52619f84-206a-481f-855a-310d351a6bc6 from test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.400-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1251--8000595249233899911, commit timestamp: Timestamp(1574796798, 4055)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.298-0500 I STORAGE [conn108] renameCollection: renaming collection 52619f84-206a-481f-855a-310d351a6bc6 from test5_fsmdb0.tmp.agg_out.7c21fe72-4f03-4977-85e5-6673dfc9bea4 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.421-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3a9f8aa6-0b8d-443d-9308-e55e3dc63658)'. Ident: 'index-1252--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 4055)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.420-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.298-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (3a9f8aa6-0b8d-443d-9308-e55e3dc63658)'. Ident: 'index-1240-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 4055)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.421-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3a9f8aa6-0b8d-443d-9308-e55e3dc63658)'. Ident: 'index-1261--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 4055)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.420-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.298-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (3a9f8aa6-0b8d-443d-9308-e55e3dc63658)'. Ident: 'index-1243-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 4055)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.421-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1251--4104909142373009110, commit timestamp: Timestamp(1574796798, 4055)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.420-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: 3f669d06-47d4-4aae-97b1-350817e67ea1: test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 (62c8349e-1e3c-48a2-a729-8f84a401c16e ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.298-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1238-8224331490264904478, commit timestamp: Timestamp(1574796798, 4055)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.424-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796798, 4055) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796798, 4171), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 4279 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 122ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.420-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.298-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.440-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.421-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.298-0500 I INDEX [conn46] Registering index build: 05edfab4-5176-47d1-a5a3-ae39cdf5296c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.440-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.422-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e (badf6d1c-a143-4f31-a54b-81c3db96d14c) to test5_fsmdb0.agg_out and drop 52619f84-206a-481f-855a-310d351a6bc6.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.298-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1992546857060491881, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2319981552856821863, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796798117), clusterTime: Timestamp(1574796798, 1520) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 1520), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 180ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.440-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: aaa9ada0-f6bf-48b2-8502-494f24410cc6: test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 (62c8349e-1e3c-48a2-a729-8f84a401c16e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.424-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.299-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.440-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.424-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (52619f84-206a-481f-855a-310d351a6bc6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 4239), t: 1 } and commit timestamp Timestamp(1574796798, 4239)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.309-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.441-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.424-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (52619f84-206a-481f-855a-310d351a6bc6).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.317-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.442-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e (badf6d1c-a143-4f31-a54b-81c3db96d14c) to test5_fsmdb0.agg_out and drop 52619f84-206a-481f-855a-310d351a6bc6.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.424-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection badf6d1c-a143-4f31-a54b-81c3db96d14c from test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.317-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.443-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.424-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (52619f84-206a-481f-855a-310d351a6bc6)'. Ident: 'index-1260--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 4239)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.317-0500 I STORAGE [conn46] Index build initialized: 05edfab4-5176-47d1-a5a3-ae39cdf5296c: test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a (4eb3eb12-c63c-46e4-862d-c0273e276971 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.444-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (52619f84-206a-481f-855a-310d351a6bc6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 4239), t: 1 } and commit timestamp Timestamp(1574796798, 4239)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.424-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (52619f84-206a-481f-855a-310d351a6bc6)'. Ident: 'index-1269--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 4239)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.317-0500 I INDEX [conn46] Waiting for index build to complete: 05edfab4-5176-47d1-a5a3-ae39cdf5296c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.444-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (52619f84-206a-481f-855a-310d351a6bc6).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.424-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1259--8000595249233899911, commit timestamp: Timestamp(1574796798, 4239)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.317-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.444-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection badf6d1c-a143-4f31-a54b-81c3db96d14c from test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.426-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 3f669d06-47d4-4aae-97b1-350817e67ea1: test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 ( 62c8349e-1e3c-48a2-a729-8f84a401c16e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.317-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (52619f84-206a-481f-855a-310d351a6bc6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 4239), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.444-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (52619f84-206a-481f-855a-310d351a6bc6)'. Ident: 'index-1260--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 4239)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.426-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 with provided UUID: 31e278d4-3a65-4c5e-b957-3214f807fbab and options: { uuid: UUID("31e278d4-3a65-4c5e-b957-3214f807fbab"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.317-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (52619f84-206a-481f-855a-310d351a6bc6).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.444-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (52619f84-206a-481f-855a-310d351a6bc6)'. Ident: 'index-1269--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 4239)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.441-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.317-0500 I STORAGE [conn110] renameCollection: renaming collection badf6d1c-a143-4f31-a54b-81c3db96d14c from test5_fsmdb0.tmp.agg_out.87e4aa1d-371c-4f6a-be90-8659073bf62e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.444-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1259--4104909142373009110, commit timestamp: Timestamp(1574796798, 4239)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.442-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a with provided UUID: cb8eecdb-ba29-49c7-8200-62ab7d7b20aa and options: { uuid: UUID("cb8eecdb-ba29-49c7-8200-62ab7d7b20aa"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.317-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (52619f84-206a-481f-855a-310d351a6bc6)'. Ident: 'index-1251-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 4239)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.446-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: aaa9ada0-f6bf-48b2-8502-494f24410cc6: test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 ( 62c8349e-1e3c-48a2-a729-8f84a401c16e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.457-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.317-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (52619f84-206a-481f-855a-310d351a6bc6)'. Ident: 'index-1257-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 4239)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.446-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 with provided UUID: 31e278d4-3a65-4c5e-b957-3214f807fbab and options: { uuid: UUID("31e278d4-3a65-4c5e-b957-3214f807fbab"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.472-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.317-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1249-8224331490264904478, commit timestamp: Timestamp(1574796798, 4239)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.460-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.472-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.317-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.461-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a with provided UUID: cb8eecdb-ba29-49c7-8200-62ab7d7b20aa and options: { uuid: UUID("cb8eecdb-ba29-49c7-8200-62ab7d7b20aa"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.472-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 05ea86b4-807c-48b5-8751-aa09c019072c: test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a (4eb3eb12-c63c-46e4-862d-c0273e276971 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.317-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8361359118479966759, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1147899315260074220, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796798085), clusterTime: Timestamp(1574796798, 1514) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 1515), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 231ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.476-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.472-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.317-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 5a03bfe4-b7a5-4fd6-ac30-20776d27b2bd: test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 ( 62c8349e-1e3c-48a2-a729-8f84a401c16e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.493-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.473-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.318-0500 I INDEX [conn114] Index build completed: 5a03bfe4-b7a5-4fd6-ac30-20776d27b2bd
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.493-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.475-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.318-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 with generated UUID: 31e278d4-3a65-4c5e-b957-3214f807fbab and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.493-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 6a61a1c2-af5a-47ef-a3e5-a508aa871c49: test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a (4eb3eb12-c63c-46e4-862d-c0273e276971 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.476-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db (ee0f9b64-4ca5-4664-8395-694f8bb5ee60) to test5_fsmdb0.agg_out and drop badf6d1c-a143-4f31-a54b-81c3db96d14c.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.319-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.493-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.476-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (badf6d1c-a143-4f31-a54b-81c3db96d14c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 4616), t: 1 } and commit timestamp Timestamp(1574796798, 4616)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.320-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a with generated UUID: cb8eecdb-ba29-49c7-8200-62ab7d7b20aa and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.493-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.476-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (badf6d1c-a143-4f31-a54b-81c3db96d14c).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.322-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.497-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.477-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection ee0f9b64-4ca5-4664-8395-694f8bb5ee60 from test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.340-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 05edfab4-5176-47d1-a5a3-ae39cdf5296c: test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a ( 4eb3eb12-c63c-46e4-862d-c0273e276971 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.498-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db (ee0f9b64-4ca5-4664-8395-694f8bb5ee60) to test5_fsmdb0.agg_out and drop badf6d1c-a143-4f31-a54b-81c3db96d14c.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.477-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (badf6d1c-a143-4f31-a54b-81c3db96d14c)'. Ident: 'index-1256--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 4616)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.340-0500 I INDEX [conn46] Index build completed: 05edfab4-5176-47d1-a5a3-ae39cdf5296c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.498-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (badf6d1c-a143-4f31-a54b-81c3db96d14c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 4616), t: 1 } and commit timestamp Timestamp(1574796798, 4616)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.477-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (badf6d1c-a143-4f31-a54b-81c3db96d14c)'. Ident: 'index-1267--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 4616)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.377-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.498-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (badf6d1c-a143-4f31-a54b-81c3db96d14c).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.498-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection ee0f9b64-4ca5-4664-8395-694f8bb5ee60 from test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.383-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.477-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1255--8000595249233899911, commit timestamp: Timestamp(1574796798, 4616)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.498-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (badf6d1c-a143-4f31-a54b-81c3db96d14c)'. Ident: 'index-1256--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 4616)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.384-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.478-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 05ea86b4-807c-48b5-8751-aa09c019072c: test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a ( 4eb3eb12-c63c-46e4-862d-c0273e276971 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.498-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (badf6d1c-a143-4f31-a54b-81c3db96d14c)'. Ident: 'index-1267--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 4616)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.384-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (badf6d1c-a143-4f31-a54b-81c3db96d14c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 4616), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.481-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b with provided UUID: 88fb6e9e-382e-493e-a175-0e756d0a74f2 and options: { uuid: UUID("88fb6e9e-382e-493e-a175-0e756d0a74f2"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.498-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1255--4104909142373009110, commit timestamp: Timestamp(1574796798, 4616)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.384-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (badf6d1c-a143-4f31-a54b-81c3db96d14c).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.495-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.499-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 6a61a1c2-af5a-47ef-a3e5-a508aa871c49: test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a ( 4eb3eb12-c63c-46e4-862d-c0273e276971 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.384-0500 I STORAGE [conn112] renameCollection: renaming collection ee0f9b64-4ca5-4664-8395-694f8bb5ee60 from test5_fsmdb0.tmp.agg_out.4cbb1141-de30-4317-9ba0-9b2abf3d04db to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.500-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 (62c8349e-1e3c-48a2-a729-8f84a401c16e) to test5_fsmdb0.agg_out and drop ee0f9b64-4ca5-4664-8395-694f8bb5ee60.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.502-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b with provided UUID: 88fb6e9e-382e-493e-a175-0e756d0a74f2 and options: { uuid: UUID("88fb6e9e-382e-493e-a175-0e756d0a74f2"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.384-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (badf6d1c-a143-4f31-a54b-81c3db96d14c)'. Ident: 'index-1248-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 4616)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.500-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (ee0f9b64-4ca5-4664-8395-694f8bb5ee60) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 5119), t: 1 } and commit timestamp Timestamp(1574796798, 5119)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.516-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.384-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (badf6d1c-a143-4f31-a54b-81c3db96d14c)'. Ident: 'index-1253-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 4616)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.500-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (ee0f9b64-4ca5-4664-8395-694f8bb5ee60).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.520-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 (62c8349e-1e3c-48a2-a729-8f84a401c16e) to test5_fsmdb0.agg_out and drop ee0f9b64-4ca5-4664-8395-694f8bb5ee60.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.384-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1245-8224331490264904478, commit timestamp: Timestamp(1574796798, 4616)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.500-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 62c8349e-1e3c-48a2-a729-8f84a401c16e from test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.520-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (ee0f9b64-4ca5-4664-8395-694f8bb5ee60) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 5119), t: 1 } and commit timestamp Timestamp(1574796798, 5119)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.384-0500 I INDEX [conn108] Registering index build: 4025c7e0-4209-464d-950b-a2c0a2b6e98c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.500-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ee0f9b64-4ca5-4664-8395-694f8bb5ee60)'. Ident: 'index-1264--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 5119)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.521-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (ee0f9b64-4ca5-4664-8395-694f8bb5ee60).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.384-0500 I INDEX [conn110] Registering index build: 8c576104-6fe2-4e5d-b440-4959bd692f1a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.500-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ee0f9b64-4ca5-4664-8395-694f8bb5ee60)'. Ident: 'index-1273--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 5119)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.521-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 62c8349e-1e3c-48a2-a729-8f84a401c16e from test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.384-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 777891638940266823, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5273484018427582777, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796798166), clusterTime: Timestamp(1574796798, 2025) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 2025), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, 
Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 217ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.500-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1263--8000595249233899911, commit timestamp: Timestamp(1574796798, 5119)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.521-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ee0f9b64-4ca5-4664-8395-694f8bb5ee60)'. Ident: 'index-1264--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 5119)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.387-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b with generated UUID: 88fb6e9e-382e-493e-a175-0e756d0a74f2 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.502-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f with provided UUID: f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c and options: { uuid: UUID("f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.521-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ee0f9b64-4ca5-4664-8395-694f8bb5ee60)'. Ident: 'index-1273--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 5119)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.411-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.516-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.521-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1263--4104909142373009110, commit timestamp: Timestamp(1574796798, 5119)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.411-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.529-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.523-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f with provided UUID: f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c and options: { uuid: UUID("f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.411-0500 I STORAGE [conn108] Index build initialized: 4025c7e0-4209-464d-950b-a2c0a2b6e98c: test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.529-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.538-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.411-0500 I INDEX [conn108] Waiting for index build to complete: 4025c7e0-4209-464d-950b-a2c0a2b6e98c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.529-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 00fffc8b-0cff-41ba-90aa-b71a183233d8: test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.549-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.419-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.529-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.549-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.419-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.530-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.549-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: fbd5edc9-ca8c-43ad-b42a-7275f9d0ad33: test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.419-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (ee0f9b64-4ca5-4664-8395-694f8bb5ee60) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 5119), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.532-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.550-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.419-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (ee0f9b64-4ca5-4664-8395-694f8bb5ee60).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.533-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a (4eb3eb12-c63c-46e4-862d-c0273e276971) to test5_fsmdb0.agg_out and drop 62c8349e-1e3c-48a2-a729-8f84a401c16e.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.550-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.419-0500 I STORAGE [conn114] renameCollection: renaming collection 62c8349e-1e3c-48a2-a729-8f84a401c16e from test5_fsmdb0.tmp.agg_out.f8bc9133-2793-4b6f-a79d-a30a0b615006 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.533-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (62c8349e-1e3c-48a2-a729-8f84a401c16e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 5560), t: 1 } and commit timestamp Timestamp(1574796798, 5560)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.552-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.419-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ee0f9b64-4ca5-4664-8395-694f8bb5ee60)'. Ident: 'index-1256-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 5119)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.533-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (62c8349e-1e3c-48a2-a729-8f84a401c16e).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.554-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a (4eb3eb12-c63c-46e4-862d-c0273e276971) to test5_fsmdb0.agg_out and drop 62c8349e-1e3c-48a2-a729-8f84a401c16e.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.419-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ee0f9b64-4ca5-4664-8395-694f8bb5ee60)'. Ident: 'index-1259-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 5119)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.533-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 4eb3eb12-c63c-46e4-862d-c0273e276971 from test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.554-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (62c8349e-1e3c-48a2-a729-8f84a401c16e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 5560), t: 1 } and commit timestamp Timestamp(1574796798, 5560)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.419-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1254-8224331490264904478, commit timestamp: Timestamp(1574796798, 5119)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.533-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (62c8349e-1e3c-48a2-a729-8f84a401c16e)'. Ident: 'index-1272--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 5560)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.554-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (62c8349e-1e3c-48a2-a729-8f84a401c16e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.419-0500 I INDEX [conn112] Registering index build: fea542a4-6749-47f1-830c-ae90eff0e146
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.533-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (62c8349e-1e3c-48a2-a729-8f84a401c16e)'. Ident: 'index-1277--8000595249233899911', commit timestamp: 'Timestamp(1574796798, 5560)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.554-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 4eb3eb12-c63c-46e4-862d-c0273e276971 from test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.419-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.533-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1271--8000595249233899911, commit timestamp: Timestamp(1574796798, 5560)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.554-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (62c8349e-1e3c-48a2-a729-8f84a401c16e)'. Ident: 'index-1272--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 5560)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.420-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8142034920537119489, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 87847158100439423, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796798245), clusterTime: Timestamp(1574796798, 3037) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 3037), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 173ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:18.537-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 00fffc8b-0cff-41ba-90aa-b71a183233d8: test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a ( cb8eecdb-ba29-49c7-8200-62ab7d7b20aa ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.554-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (62c8349e-1e3c-48a2-a729-8f84a401c16e)'. Ident: 'index-1277--4104909142373009110', commit timestamp: 'Timestamp(1574796798, 5560)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.420-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.554-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1271--4104909142373009110, commit timestamp: Timestamp(1574796798, 5560)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.235-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 with provided UUID: 82f86bb6-6d32-4627-a907-851e318544c0 and options: { uuid: UUID("82f86bb6-6d32-4627-a907-851e318544c0"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.422-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f with generated UUID: f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:18.555-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: fbd5edc9-ca8c-43ad-b42a-7275f9d0ad33: test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a ( cb8eecdb-ba29-49c7-8200-62ab7d7b20aa ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.250-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.424-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.251-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 with provided UUID: 82f86bb6-6d32-4627-a907-851e318544c0 and options: { uuid: UUID("82f86bb6-6d32-4627-a907-851e318544c0"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.443-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 4025c7e0-4209-464d-950b-a2c0a2b6e98c: test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a ( cb8eecdb-ba29-49c7-8200-62ab7d7b20aa ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.452-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.452-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.452-0500 I STORAGE [conn110] Index build initialized: 8c576104-6fe2-4e5d-b440-4959bd692f1a: test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 (31e278d4-3a65-4c5e-b957-3214f807fbab ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.452-0500 I INDEX [conn110] Waiting for index build to complete: 8c576104-6fe2-4e5d-b440-4959bd692f1a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.452-0500 I INDEX [conn108] Index build completed: 4025c7e0-4209-464d-950b-a2c0a2b6e98c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.459-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.459-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.459-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (62c8349e-1e3c-48a2-a729-8f84a401c16e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796798, 5560), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.459-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (62c8349e-1e3c-48a2-a729-8f84a401c16e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.459-0500 I STORAGE [conn46] renameCollection: renaming collection 4eb3eb12-c63c-46e4-862d-c0273e276971 from test5_fsmdb0.tmp.agg_out.1c9444df-c9d7-42da-90bd-18fcf81cf00a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.459-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (62c8349e-1e3c-48a2-a729-8f84a401c16e)'. Ident: 'index-1263-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 5560)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.459-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (62c8349e-1e3c-48a2-a729-8f84a401c16e)'. Ident: 'index-1265-8224331490264904478', commit timestamp: 'Timestamp(1574796798, 5560)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.459-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1261-8224331490264904478, commit timestamp: Timestamp(1574796798, 5560)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.459-0500 I INDEX [conn114] Registering index build: a4c28141-13ae-4fe8-b6d8-7152589b25bf
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.459-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.459-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3288229418954080939, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7920672855948572699, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796798246), clusterTime: Timestamp(1574796798, 3037) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 3039), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 212ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.460-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.462-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 with generated UUID: 82f86bb6-6d32-4627-a907-851e318544c0 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.471-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.487-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.487-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.487-0500 I STORAGE [conn112] Index build initialized: fea542a4-6749-47f1-830c-ae90eff0e146: test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b (88fb6e9e-382e-493e-a175-0e756d0a74f2 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.487-0500 I INDEX [conn112] Waiting for index build to complete: fea542a4-6749-47f1-830c-ae90eff0e146
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.489-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 8c576104-6fe2-4e5d-b440-4959bd692f1a: test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 ( 31e278d4-3a65-4c5e-b957-3214f807fbab ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.489-0500 I INDEX [conn110] Index build completed: 8c576104-6fe2-4e5d-b440-4959bd692f1a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.489-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 4615), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 14738 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 111ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:18.496-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.072-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20001 because the pool meets constraints; 2 connections to that host remain open
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.227-0500 I NETWORK [conn106] end connection 127.0.0.1:39016 (47 connections now open)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.227-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 appName: "tid:3" command: create { create: "tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25", temp: true, validationLevel: "strict", validationAction: "error", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 5560), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2765ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.227-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.227-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (4eb3eb12-c63c-46e4-862d-c0273e276971) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 1), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.228-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (4eb3eb12-c63c-46e4-862d-c0273e276971).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.228-0500 I STORAGE [conn46] renameCollection: renaming collection cb8eecdb-ba29-49c7-8200-62ab7d7b20aa from test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.228-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4eb3eb12-c63c-46e4-862d-c0273e276971)'. Ident: 'index-1264-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.228-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4eb3eb12-c63c-46e4-862d-c0273e276971)'. Ident: 'index-1267-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.228-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1262-8224331490264904478, commit timestamp: Timestamp(1574796801, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.228-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.228-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a appName: "tid:4" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "strict", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 6063), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2740565 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2741ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.228-0500 I INDEX [conn108] Registering index build: dd946066-eb88-48a5-a5f7-516ae17f2052
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.228-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796798, 5559), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796798, 5559), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796798, 5559). Collection minimum timestamp is Timestamp(1574796798, 5627)" errName:SnapshotUnavailable errCode:246 reslen:602 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2673015 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2673ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.228-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5249031678382885237, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4389215807655177373, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796798319), clusterTime: Timestamp(1574796798, 4303) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 4304), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2908ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.228-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.231-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 with generated UUID: 54ed7f91-a029-472c-b369-ed3056eb5db8 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.236-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.251-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.251-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.251-0500 I STORAGE [conn114] Index build initialized: a4c28141-13ae-4fe8-b6d8-7152589b25bf: test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.252-0500 I INDEX [conn114] Waiting for index build to complete: a4c28141-13ae-4fe8-b6d8-7152589b25bf
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.252-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.253-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: fea542a4-6749-47f1-830c-ae90eff0e146: test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b ( 88fb6e9e-382e-493e-a175-0e756d0a74f2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.262-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.262-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.268-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.269-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.269-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.269-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 440e53ea-4460-412b-81d6-bdddbd969ca6: test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 (31e278d4-3a65-4c5e-b957-3214f807fbab ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.269-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.270-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.270-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.273-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.273-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa) to test5_fsmdb0.agg_out and drop 4eb3eb12-c63c-46e4-862d-c0273e276971.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.274-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (4eb3eb12-c63c-46e4-862d-c0273e276971) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 1), t: 1 } and commit timestamp Timestamp(1574796801, 1)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.274-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (4eb3eb12-c63c-46e4-862d-c0273e276971).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.274-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection cb8eecdb-ba29-49c7-8200-62ab7d7b20aa from test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.274-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4eb3eb12-c63c-46e4-862d-c0273e276971)'. Ident: 'index-1276--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 1)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.274-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4eb3eb12-c63c-46e4-862d-c0273e276971)'. Ident: 'index-1283--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 1)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.274-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1275--8000595249233899911, commit timestamp: Timestamp(1574796801, 1)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.274-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 with provided UUID: 54ed7f91-a029-472c-b369-ed3056eb5db8 and options: { uuid: UUID("54ed7f91-a029-472c-b369-ed3056eb5db8"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.277-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 440e53ea-4460-412b-81d6-bdddbd969ca6: test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 ( 31e278d4-3a65-4c5e-b957-3214f807fbab ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.278-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.278-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.278-0500 I STORAGE [conn108] Index build initialized: dd946066-eb88-48a5-a5f7-516ae17f2052: test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 (82f86bb6-6d32-4627-a907-851e318544c0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.278-0500 I INDEX [conn108] Waiting for index build to complete: dd946066-eb88-48a5-a5f7-516ae17f2052
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.278-0500 I INDEX [conn112] Index build completed: fea542a4-6749-47f1-830c-ae90eff0e146
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.278-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.278-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 5119), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 7023 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2858ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.278-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 509), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.278-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.278-0500 I STORAGE [conn110] renameCollection: renaming collection 31e278d4-3a65-4c5e-b957-3214f807fbab from test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.278-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa)'. Ident: 'index-1272-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 509)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.278-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa)'. Ident: 'index-1273-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 509)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.278-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1270-8224331490264904478, commit timestamp: Timestamp(1574796801, 509)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.278-0500 I INDEX [conn46] Registering index build: 4c54c41c-c72c-40b1-a413-9f3b3a6431f2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.278-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.278-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 479233507697050444, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1186830625511889556, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796798300), clusterTime: Timestamp(1574796798, 4171) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 4303), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2960ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.279-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: a4c28141-13ae-4fe8-b6d8-7152589b25bf: test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f ( f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:21.279-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796798, 4171), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2978ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.279-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.281-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 with generated UUID: 10127493-e92e-46d5-ab1a-331bbba01f70 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.287-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.287-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.287-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: fa13997d-ab46-4488-bcc9-cd8f84d275d1: test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 (31e278d4-3a65-4c5e-b957-3214f807fbab ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.287-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.288-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.288-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.290-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.290-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.292-0500 I COMMAND [ReplWriterWorker-11] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa) to test5_fsmdb0.agg_out and drop 4eb3eb12-c63c-46e4-862d-c0273e276971.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.292-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.agg_out (4eb3eb12-c63c-46e4-862d-c0273e276971) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 1), t: 1 } and commit timestamp Timestamp(1574796801, 1)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.292-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.agg_out (4eb3eb12-c63c-46e4-862d-c0273e276971).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.292-0500 I STORAGE [ReplWriterWorker-11] renameCollection: renaming collection cb8eecdb-ba29-49c7-8200-62ab7d7b20aa from test5_fsmdb0.tmp.agg_out.e4e09ad9-bc18-4742-b275-924b22a85a0a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.292-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (4eb3eb12-c63c-46e4-862d-c0273e276971)'. Ident: 'index-1276--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 1)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.292-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (4eb3eb12-c63c-46e4-862d-c0273e276971)'. Ident: 'index-1283--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 1)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.292-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1275--4104909142373009110, commit timestamp: Timestamp(1574796801, 1)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.293-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: fa13997d-ab46-4488-bcc9-cd8f84d275d1: test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 ( 31e278d4-3a65-4c5e-b957-3214f807fbab ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.293-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 with provided UUID: 54ed7f91-a029-472c-b369-ed3056eb5db8 and options: { uuid: UUID("54ed7f91-a029-472c-b369-ed3056eb5db8"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.305-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.305-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.305-0500 I STORAGE [conn46] Index build initialized: 4c54c41c-c72c-40b1-a413-9f3b3a6431f2: test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 (54ed7f91-a029-472c-b369-ed3056eb5db8 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.306-0500 I INDEX [conn46] Waiting for index build to complete: 4c54c41c-c72c-40b1-a413-9f3b3a6431f2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.306-0500 I INDEX [conn114] Index build completed: a4c28141-13ae-4fe8-b6d8-7152589b25bf
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.306-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.306-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 5560), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 2740878 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2846ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.307-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.307-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: dd946066-eb88-48a5-a5f7-516ae17f2052: test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 ( 82f86bb6-6d32-4627-a907-851e318544c0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.307-0500 I INDEX [conn108] Index build completed: dd946066-eb88-48a5-a5f7-516ae17f2052
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.309-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.309-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.309-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 235c2283-4255-4dda-a1e2-e88c2f552e13: test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b (88fb6e9e-382e-493e-a175-0e756d0a74f2 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.309-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.310-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.312-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.314-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.314-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.315-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 235c2283-4255-4dda-a1e2-e88c2f552e13: test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b ( 88fb6e9e-382e-493e-a175-0e756d0a74f2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.317-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.317-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.318-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (31e278d4-3a65-4c5e-b957-3214f807fbab) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 1016), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.318-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (31e278d4-3a65-4c5e-b957-3214f807fbab).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.318-0500 I STORAGE [conn110] renameCollection: renaming collection 88fb6e9e-382e-493e-a175-0e756d0a74f2 from test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.318-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (31e278d4-3a65-4c5e-b957-3214f807fbab)'. Ident: 'index-1271-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 1016)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.318-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (31e278d4-3a65-4c5e-b957-3214f807fbab)'. Ident: 'index-1277-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 1016)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.318-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1269-8224331490264904478, commit timestamp: Timestamp(1574796801, 1016)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.318-0500 I INDEX [conn112] Registering index build: 612d1183-63f6-475b-88f0-7726824b8f93
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.318-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7961421522915383658, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7825355286159858455, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796798386), clusterTime: Timestamp(1574796798, 4680) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 4744), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2931ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:21.318-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796798, 4680), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2932ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.321-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 with generated UUID: 2adf409a-3c13-4d0a-b084-ebd6015ac8ca and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.321-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 4c54c41c-c72c-40b1-a413-9f3b3a6431f2: test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 ( 54ed7f91-a029-472c-b369-ed3056eb5db8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.325-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.325-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.325-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 6b0f541c-c7fc-42e2-8f54-a0c1004d8e45: test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b (88fb6e9e-382e-493e-a175-0e756d0a74f2 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.325-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.326-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.328-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.331-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 6b0f541c-c7fc-42e2-8f54-a0c1004d8e45: test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b ( 88fb6e9e-382e-493e-a175-0e756d0a74f2 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.331-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.331-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.331-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: dcc13929-8bc3-4bf8-ba16-6f26d6807d23: test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.332-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.332-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.334-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 (31e278d4-3a65-4c5e-b957-3214f807fbab) to test5_fsmdb0.agg_out and drop cb8eecdb-ba29-49c7-8200-62ab7d7b20aa.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.335-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.335-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 509), t: 1 } and commit timestamp Timestamp(1574796801, 509)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.336-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.336-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 31e278d4-3a65-4c5e-b957-3214f807fbab from test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.336-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa)'. Ident: 'index-1282--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 509)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.336-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa)'. Ident: 'index-1289--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 509)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.336-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1281--8000595249233899911, commit timestamp: Timestamp(1574796801, 509)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.336-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 with provided UUID: 10127493-e92e-46d5-ab1a-331bbba01f70 and options: { uuid: UUID("10127493-e92e-46d5-ab1a-331bbba01f70"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.338-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: dcc13929-8bc3-4bf8-ba16-6f26d6807d23: test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f ( f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.347-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.347-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.347-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 9df1a76f-bcf3-4e07-b64e-4e13898771e3: test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.347-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.347-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.347-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.347-0500 I STORAGE [conn112] Index build initialized: 612d1183-63f6-475b-88f0-7726824b8f93: test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 (10127493-e92e-46d5-ab1a-331bbba01f70 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.347-0500 I INDEX [conn112] Waiting for index build to complete: 612d1183-63f6-475b-88f0-7726824b8f93
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.347-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.347-0500 I INDEX [conn46] Index build completed: 4c54c41c-c72c-40b1-a413-9f3b3a6431f2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.349-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 (31e278d4-3a65-4c5e-b957-3214f807fbab) to test5_fsmdb0.agg_out and drop cb8eecdb-ba29-49c7-8200-62ab7d7b20aa.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.351-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.351-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 509), t: 1 } and commit timestamp Timestamp(1574796801, 509)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.351-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.351-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection 31e278d4-3a65-4c5e-b957-3214f807fbab from test5_fsmdb0.tmp.agg_out.10d9f2a4-cc73-40c4-a642-49bee58f3404 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.351-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa)'. Ident: 'index-1282--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 509)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.351-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (cb8eecdb-ba29-49c7-8200-62ab7d7b20aa)'. Ident: 'index-1289--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 509)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.351-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1281--4104909142373009110, commit timestamp: Timestamp(1574796801, 509)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.353-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.353-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 9df1a76f-bcf3-4e07-b64e-4e13898771e3: test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f ( f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.354-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 with provided UUID: 10127493-e92e-46d5-ab1a-331bbba01f70 and options: { uuid: UUID("10127493-e92e-46d5-ab1a-331bbba01f70"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.355-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.355-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.355-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (88fb6e9e-382e-493e-a175-0e756d0a74f2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 1967), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.355-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (88fb6e9e-382e-493e-a175-0e756d0a74f2).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.355-0500 I STORAGE [conn114] renameCollection: renaming collection f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c from test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.355-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (88fb6e9e-382e-493e-a175-0e756d0a74f2)'. Ident: 'index-1276-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 1967)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.355-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (88fb6e9e-382e-493e-a175-0e756d0a74f2)'. Ident: 'index-1281-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 1967)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.355-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1274-8224331490264904478, commit timestamp: Timestamp(1574796801, 1967)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.355-0500 I INDEX [conn108] Registering index build: 26094431-c44a-4ee9-872d-2b7fea27f4ff
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.355-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.355-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2921085748913997199, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7625996028466321577, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796798421), clusterTime: Timestamp(1574796798, 5183) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 5183), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2933ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:21.356-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796798, 5183), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2934ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.356-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.367-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.368-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.371-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.371-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.371-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: c0d825a5-1f99-4a60-b31d-4b33164bc67b: test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 (82f86bb6-6d32-4627-a907-851e318544c0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.371-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.371-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.374-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.374-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.374-0500 I STORAGE [conn108] Index build initialized: 26094431-c44a-4ee9-872d-2b7fea27f4ff: test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 (2adf409a-3c13-4d0a-b084-ebd6015ac8ca ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.374-0500 I INDEX [conn108] Waiting for index build to complete: 26094431-c44a-4ee9-872d-2b7fea27f4ff
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.374-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.374-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 2023), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.374-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.374-0500 I STORAGE [conn110] renameCollection: renaming collection 82f86bb6-6d32-4627-a907-851e318544c0 from test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.374-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c)'. Ident: 'index-1280-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 2023)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.374-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c)'. Ident: 'index-1285-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 2023)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.375-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1278-8224331490264904478, commit timestamp: Timestamp(1574796801, 2023)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.375-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.375-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.375-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2113987130544476111, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 7639088149222608966, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796798461), clusterTime: Timestamp(1574796798, 5560) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796798, 5560), signature: { hash: BinData(0, E6F53FB4E0546E7978B1299E99F5ADEE9AA74B7E), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2913ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:21.375-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796798, 5560), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2914ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.375-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 612d1183-63f6-475b-88f0-7726824b8f93: test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 ( 10127493-e92e-46d5-ab1a-331bbba01f70 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.375-0500 I INDEX [conn112] Index build completed: 612d1183-63f6-475b-88f0-7726824b8f93
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.376-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e with generated UUID: d9f423b2-4f76-486d-b074-14eb2df3f1e3 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.376-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.377-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 with generated UUID: 35fc03b2-7e7d-4d01-a1ac-c19415506b2e and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.379-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c0d825a5-1f99-4a60-b31d-4b33164bc67b: test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 ( 82f86bb6-6d32-4627-a907-851e318544c0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.379-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.388-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.388-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.388-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 9dc52559-d40a-4cb4-a4f6-95756ea90eb0: test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 (82f86bb6-6d32-4627-a907-851e318544c0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.388-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.388-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.391-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.394-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.395-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 9dc52559-d40a-4cb4-a4f6-95756ea90eb0: test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 ( 82f86bb6-6d32-4627-a907-851e318544c0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.401-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 26094431-c44a-4ee9-872d-2b7fea27f4ff: test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 ( 2adf409a-3c13-4d0a-b084-ebd6015ac8ca ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:21.417-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796801, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 187ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:21.538-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796801, 2087), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 162ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.394-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.409-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:21.450-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796801, 509), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 170ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.401-0500 I INDEX [conn108] Index build completed: 26094431-c44a-4ee9-872d-2b7fea27f4ff
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:21.576-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796801, 2019), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 218ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.394-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: cc3d32f2-ec1f-430b-bfeb-884c7a8c68ff: test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 (54ed7f91-a029-472c-b369-ed3056eb5db8 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.409-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:21.487-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796801, 1016), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 167ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.409-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:21.707-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796801, 4044), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 167ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.395-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.409-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: db9184bf-f18b-42b6-80c4-7d75a1fa7133: test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 (54ed7f91-a029-472c-b369-ed3056eb5db8 ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:21.617-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796801, 3159), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 165ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.416-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:21.764-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796801, 4549), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 187ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.395-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.409-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:21.618-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796801, 2656), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 198ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.416-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.396-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b (88fb6e9e-382e-493e-a175-0e756d0a74f2) to test5_fsmdb0.agg_out and drop 31e278d4-3a65-4c5e-b957-3214f807fbab.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.410-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:21.673-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796801, 3536), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 185ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.417-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (82f86bb6-6d32-4627-a907-851e318544c0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 2592), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.398-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.411-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b (88fb6e9e-382e-493e-a175-0e756d0a74f2) to test5_fsmdb0.agg_out and drop 31e278d4-3a65-4c5e-b957-3214f807fbab.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:24.517-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796801, 5555), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2898ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.417-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (82f86bb6-6d32-4627-a907-851e318544c0).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.398-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (31e278d4-3a65-4c5e-b957-3214f807fbab) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 1016), t: 1 } and commit timestamp Timestamp(1574796801, 1016)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.411-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.417-0500 I STORAGE [conn46] renameCollection: renaming collection 54ed7f91-a029-472c-b369-ed3056eb5db8 from test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.398-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (31e278d4-3a65-4c5e-b957-3214f807fbab).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.412-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (31e278d4-3a65-4c5e-b957-3214f807fbab) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 1016), t: 1 } and commit timestamp Timestamp(1574796801, 1016)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.417-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (82f86bb6-6d32-4627-a907-851e318544c0)'. Ident: 'index-1284-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 2592)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.398-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 88fb6e9e-382e-493e-a175-0e756d0a74f2 from test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.412-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (31e278d4-3a65-4c5e-b957-3214f807fbab).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.417-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (82f86bb6-6d32-4627-a907-851e318544c0)'. Ident: 'index-1289-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 2592)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.398-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (31e278d4-3a65-4c5e-b957-3214f807fbab)'. Ident: 'index-1280--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 1016)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.412-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection 88fb6e9e-382e-493e-a175-0e756d0a74f2 from test5_fsmdb0.tmp.agg_out.458d84f0-d6aa-4139-8a44-66b033e2995b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.417-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1282-8224331490264904478, commit timestamp: Timestamp(1574796801, 2592)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.398-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (31e278d4-3a65-4c5e-b957-3214f807fbab)'. Ident: 'index-1293--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 1016)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.412-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (31e278d4-3a65-4c5e-b957-3214f807fbab)'. Ident: 'index-1280--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 1016)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.417-0500 I INDEX [conn114] Registering index build: d2be132e-edb6-471b-8151-b001d4da3b66
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.398-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1279--8000595249233899911, commit timestamp: Timestamp(1574796801, 1016)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.412-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (31e278d4-3a65-4c5e-b957-3214f807fbab)'. Ident: 'index-1293--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 1016)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.417-0500 I INDEX [conn112] Registering index build: 7d7b9a1c-2b4c-484e-9747-820a2883bc84
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.399-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 with provided UUID: 2adf409a-3c13-4d0a-b084-ebd6015ac8ca and options: { uuid: UUID("2adf409a-3c13-4d0a-b084-ebd6015ac8ca"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.412-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1279--4104909142373009110, commit timestamp: Timestamp(1574796801, 1016)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.417-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4938188269636157192, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8647587853661694959, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796801230), clusterTime: Timestamp(1574796801, 1) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 1), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 186ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.400-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: cc3d32f2-ec1f-430b-bfeb-884c7a8c68ff: test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 ( 54ed7f91-a029-472c-b369-ed3056eb5db8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.413-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: db9184bf-f18b-42b6-80c4-7d75a1fa7133: test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 ( 54ed7f91-a029-472c-b369-ed3056eb5db8 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.421-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a with generated UUID: 39b36a6e-5d33-4d14-8983-d91ca17abe20 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.415-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.416-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 with provided UUID: 2adf409a-3c13-4d0a-b084-ebd6015ac8ca and options: { uuid: UUID("2adf409a-3c13-4d0a-b084-ebd6015ac8ca"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.442-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.423-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c) to test5_fsmdb0.agg_out and drop 88fb6e9e-382e-493e-a175-0e756d0a74f2.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.431-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.442-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.423-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (88fb6e9e-382e-493e-a175-0e756d0a74f2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 1967), t: 1 } and commit timestamp Timestamp(1574796801, 1967)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.437-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c) to test5_fsmdb0.agg_out and drop 88fb6e9e-382e-493e-a175-0e756d0a74f2.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.442-0500 I STORAGE [conn114] Index build initialized: d2be132e-edb6-471b-8151-b001d4da3b66: test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 (35fc03b2-7e7d-4d01-a1ac-c19415506b2e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.423-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (88fb6e9e-382e-493e-a175-0e756d0a74f2).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.437-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (88fb6e9e-382e-493e-a175-0e756d0a74f2) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 1967), t: 1 } and commit timestamp Timestamp(1574796801, 1967)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.442-0500 I INDEX [conn114] Waiting for index build to complete: d2be132e-edb6-471b-8151-b001d4da3b66
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.423-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c from test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.437-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (88fb6e9e-382e-493e-a175-0e756d0a74f2).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.449-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.423-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (88fb6e9e-382e-493e-a175-0e756d0a74f2)'. Ident: 'index-1286--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 1967)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.437-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c from test5_fsmdb0.tmp.agg_out.f43e570b-1720-41fd-8f26-bded061edf9f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.449-0500 I COMMAND [conn110] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.423-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (88fb6e9e-382e-493e-a175-0e756d0a74f2)'. Ident: 'index-1297--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 1967)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.438-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (88fb6e9e-382e-493e-a175-0e756d0a74f2)'. Ident: 'index-1286--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 1967)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.449-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.agg_out (54ed7f91-a029-472c-b369-ed3056eb5db8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 3095), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.423-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1285--8000595249233899911, commit timestamp: Timestamp(1574796801, 1967)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.438-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (88fb6e9e-382e-493e-a175-0e756d0a74f2)'. Ident: 'index-1297--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 1967)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.449-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.agg_out (54ed7f91-a029-472c-b369-ed3056eb5db8).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.438-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.438-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1285--4104909142373009110, commit timestamp: Timestamp(1574796801, 1967)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.449-0500 I STORAGE [conn110] renameCollection: renaming collection 10127493-e92e-46d5-ab1a-331bbba01f70 from test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.438-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.456-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.449-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (54ed7f91-a029-472c-b369-ed3056eb5db8)'. Ident: 'index-1288-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 3095)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.438-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 324253ee-cac6-4d6b-9dcc-08a71c86193d: test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 (10127493-e92e-46d5-ab1a-331bbba01f70 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.456-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.449-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (54ed7f91-a029-472c-b369-ed3056eb5db8)'. Ident: 'index-1291-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 3095)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.439-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.456-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 4b5e125f-e11e-4dde-8dbb-45ee66c4eae4: test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 (10127493-e92e-46d5-ab1a-331bbba01f70 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.449-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1286-8224331490264904478, commit timestamp: Timestamp(1574796801, 3095)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.439-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.456-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.450-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.441-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 (82f86bb6-6d32-4627-a907-851e318544c0) to test5_fsmdb0.agg_out and drop f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.457-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.450-0500 I INDEX [conn46] Registering index build: 369ca097-7ef1-4e33-ac4d-fbab0d130eb9
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.441-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.458-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 (82f86bb6-6d32-4627-a907-851e318544c0) to test5_fsmdb0.agg_out and drop f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.450-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7226191142606038192, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8394700480776651829, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796801280), clusterTime: Timestamp(1574796801, 509) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 509), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 169ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.442-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 2023), t: 1 } and commit timestamp Timestamp(1574796801, 2023)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.460-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.450-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.442-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.460-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 2023), t: 1 } and commit timestamp Timestamp(1574796801, 2023)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.453-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b with generated UUID: c5d7a19e-fda3-4451-bba4-92e58d053153 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.442-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 82f86bb6-6d32-4627-a907-851e318544c0 from test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.460-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.463-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.442-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c)'. Ident: 'index-1288--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 2023)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.460-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 82f86bb6-6d32-4627-a907-851e318544c0 from test5_fsmdb0.tmp.agg_out.f24f0fce-f8bb-4e2c-8410-150e57db1a25 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.478-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.442-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c)'. Ident: 'index-1299--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 2023)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.460-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c)'. Ident: 'index-1288--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 2023)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.478-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.442-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1287--8000595249233899911, commit timestamp: Timestamp(1574796801, 2023)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.460-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f45d388c-626b-4fb9-89d5-cc0ca4ebfe6c)'. Ident: 'index-1299--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 2023)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.478-0500 I STORAGE [conn112] Index build initialized: 7d7b9a1c-2b4c-484e-9747-820a2883bc84: test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e (d9f423b2-4f76-486d-b074-14eb2df3f1e3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.444-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 324253ee-cac6-4d6b-9dcc-08a71c86193d: test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 ( 10127493-e92e-46d5-ab1a-331bbba01f70 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.460-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1287--4104909142373009110, commit timestamp: Timestamp(1574796801, 2023)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.478-0500 I INDEX [conn112] Waiting for index build to complete: 7d7b9a1c-2b4c-484e-9747-820a2883bc84
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.446-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e with provided UUID: d9f423b2-4f76-486d-b074-14eb2df3f1e3 and options: { uuid: UUID("d9f423b2-4f76-486d-b074-14eb2df3f1e3"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.462-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 4b5e125f-e11e-4dde-8dbb-45ee66c4eae4: test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 ( 10127493-e92e-46d5-ab1a-331bbba01f70 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.478-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: d2be132e-edb6-471b-8151-b001d4da3b66: test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 ( 35fc03b2-7e7d-4d01-a1ac-c19415506b2e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.461-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.464-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e with provided UUID: d9f423b2-4f76-486d-b074-14eb2df3f1e3 and options: { uuid: UUID("d9f423b2-4f76-486d-b074-14eb2df3f1e3"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.478-0500 I INDEX [conn114] Index build completed: d2be132e-edb6-471b-8151-b001d4da3b66
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.462-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 with provided UUID: 35fc03b2-7e7d-4d01-a1ac-c19415506b2e and options: { uuid: UUID("35fc03b2-7e7d-4d01-a1ac-c19415506b2e"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.480-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.485-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.477-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.481-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 with provided UUID: 35fc03b2-7e7d-4d01-a1ac-c19415506b2e and options: { uuid: UUID("35fc03b2-7e7d-4d01-a1ac-c19415506b2e"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.486-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.498-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.497-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.486-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (10127493-e92e-46d5-ab1a-331bbba01f70) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 3536), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.498-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.515-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.486-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (10127493-e92e-46d5-ab1a-331bbba01f70).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.498-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 9b6ac22d-f857-46d7-8e22-e9378026ff63: test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 (2adf409a-3c13-4d0a-b084-ebd6015ac8ca ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.515-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.486-0500 I STORAGE [conn108] renameCollection: renaming collection 2adf409a-3c13-4d0a-b084-ebd6015ac8ca from test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.498-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.515-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 2864e5de-ea93-40e2-abdd-509f458b9d70: test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 (2adf409a-3c13-4d0a-b084-ebd6015ac8ca ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.486-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (10127493-e92e-46d5-ab1a-331bbba01f70)'. Ident: 'index-1294-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 3536)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.498-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.515-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.486-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (10127493-e92e-46d5-ab1a-331bbba01f70)'. Ident: 'index-1295-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 3536)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.500-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.516-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.486-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1292-8224331490264904478, commit timestamp: Timestamp(1574796801, 3536)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.504-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 9b6ac22d-f857-46d7-8e22-e9378026ff63: test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 ( 2adf409a-3c13-4d0a-b084-ebd6015ac8ca ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.518-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.486-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.505-0500 I COMMAND [ReplWriterWorker-9] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 (54ed7f91-a029-472c-b369-ed3056eb5db8) to test5_fsmdb0.agg_out and drop 82f86bb6-6d32-4627-a907-851e318544c0.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.520-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 (54ed7f91-a029-472c-b369-ed3056eb5db8) to test5_fsmdb0.agg_out and drop 82f86bb6-6d32-4627-a907-851e318544c0.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.486-0500 I INDEX [conn110] Registering index build: 928762fe-c1f2-4862-92a7-a3c7ccb34091
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.506-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.agg_out (82f86bb6-6d32-4627-a907-851e318544c0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 2592), t: 1 } and commit timestamp Timestamp(1574796801, 2592)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.520-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (82f86bb6-6d32-4627-a907-851e318544c0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 2592), t: 1 } and commit timestamp Timestamp(1574796801, 2592)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.486-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5364322210195585606, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6029665396106987186, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796801319), clusterTime: Timestamp(1574796801, 1016) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 1016), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 166ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.506-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.agg_out (82f86bb6-6d32-4627-a907-851e318544c0).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.520-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (82f86bb6-6d32-4627-a907-851e318544c0).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.487-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.506-0500 I STORAGE [ReplWriterWorker-9] renameCollection: renaming collection 54ed7f91-a029-472c-b369-ed3056eb5db8 from test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.520-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 54ed7f91-a029-472c-b369-ed3056eb5db8 from test5_fsmdb0.tmp.agg_out.ae687570-2aaa-409e-a4ea-435dfc25cb35 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.489-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df with generated UUID: 31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.506-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (82f86bb6-6d32-4627-a907-851e318544c0)'. Ident: 'index-1292--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 2592)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.520-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (82f86bb6-6d32-4627-a907-851e318544c0)'. Ident: 'index-1292--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 2592)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.496-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.506-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (82f86bb6-6d32-4627-a907-851e318544c0)'. Ident: 'index-1303--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 2592)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.520-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (82f86bb6-6d32-4627-a907-851e318544c0)'. Ident: 'index-1303--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 2592)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.510-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.506-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1291--8000595249233899911, commit timestamp: Timestamp(1574796801, 2592)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.520-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1291--4104909142373009110, commit timestamp: Timestamp(1574796801, 2592)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.510-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.509-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a with provided UUID: 39b36a6e-5d33-4d14-8983-d91ca17abe20 and options: { uuid: UUID("39b36a6e-5d33-4d14-8983-d91ca17abe20"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.521-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 2864e5de-ea93-40e2-abdd-509f458b9d70: test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 ( 2adf409a-3c13-4d0a-b084-ebd6015ac8ca ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.510-0500 I STORAGE [conn46] Index build initialized: 369ca097-7ef1-4e33-ac4d-fbab0d130eb9: test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a (39b36a6e-5d33-4d14-8983-d91ca17abe20 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.524-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.523-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796801, 2592) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796801, 2656), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 12751 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 102ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.510-0500 I INDEX [conn46] Waiting for index build to complete: 369ca097-7ef1-4e33-ac4d-fbab0d130eb9
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.529-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 (10127493-e92e-46d5-ab1a-331bbba01f70) to test5_fsmdb0.agg_out and drop 54ed7f91-a029-472c-b369-ed3056eb5db8.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.525-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a with provided UUID: 39b36a6e-5d33-4d14-8983-d91ca17abe20 and options: { uuid: UUID("39b36a6e-5d33-4d14-8983-d91ca17abe20"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.511-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.529-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (54ed7f91-a029-472c-b369-ed3056eb5db8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 3095), t: 1 } and commit timestamp Timestamp(1574796801, 3095)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.540-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.513-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 7d7b9a1c-2b4c-484e-9747-820a2883bc84: test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e ( d9f423b2-4f76-486d-b074-14eb2df3f1e3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.529-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (54ed7f91-a029-472c-b369-ed3056eb5db8).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.544-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 (10127493-e92e-46d5-ab1a-331bbba01f70) to test5_fsmdb0.agg_out and drop 54ed7f91-a029-472c-b369-ed3056eb5db8.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.521-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.529-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 10127493-e92e-46d5-ab1a-331bbba01f70 from test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.544-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (54ed7f91-a029-472c-b369-ed3056eb5db8) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 3095), t: 1 } and commit timestamp Timestamp(1574796801, 3095)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.521-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.529-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (54ed7f91-a029-472c-b369-ed3056eb5db8)'. Ident: 'index-1296--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 3095)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.544-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (54ed7f91-a029-472c-b369-ed3056eb5db8).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.530-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.529-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (54ed7f91-a029-472c-b369-ed3056eb5db8)'. Ident: 'index-1305--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 3095)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.544-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 10127493-e92e-46d5-ab1a-331bbba01f70 from test5_fsmdb0.tmp.agg_out.e6fae22f-d5ce-4331-989d-87d23ce6d695 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.537-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.529-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1295--8000595249233899911, commit timestamp: Timestamp(1574796801, 3095)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.544-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (54ed7f91-a029-472c-b369-ed3056eb5db8)'. Ident: 'index-1296--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 3095)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.537-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.532-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b with provided UUID: c5d7a19e-fda3-4451-bba4-92e58d053153 and options: { uuid: UUID("c5d7a19e-fda3-4451-bba4-92e58d053153"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.544-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (54ed7f91-a029-472c-b369-ed3056eb5db8)'. Ident: 'index-1305--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 3095)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.537-0500 I STORAGE [conn110] Index build initialized: 928762fe-c1f2-4862-92a7-a3c7ccb34091: test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b (c5d7a19e-fda3-4451-bba4-92e58d053153 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.548-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.544-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1295--4104909142373009110, commit timestamp: Timestamp(1574796801, 3095)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.537-0500 I INDEX [conn110] Waiting for index build to complete: 928762fe-c1f2-4862-92a7-a3c7ccb34091
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.568-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.550-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b with provided UUID: c5d7a19e-fda3-4451-bba4-92e58d053153 and options: { uuid: UUID("c5d7a19e-fda3-4451-bba4-92e58d053153"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.537-0500 I INDEX [conn112] Index build completed: 7d7b9a1c-2b4c-484e-9747-820a2883bc84
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.568-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.564-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.537-0500 I COMMAND [conn114] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.568-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 0b05ecda-2750-41d0-a143-04aa1c0c9a38: test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 (35fc03b2-7e7d-4d01-a1ac-c19415506b2e ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.585-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.537-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e appName: "tid:1" command: createIndexes { createIndexes: "tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 2591), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 15119 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 127ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.568-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.585-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.538-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.agg_out (2adf409a-3c13-4d0a-b084-ebd6015ac8ca) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 4044), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.569-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.585-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: b1a57f1a-2bc9-4c4e-99e2-41a410e50221: test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 (35fc03b2-7e7d-4d01-a1ac-c19415506b2e ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.538-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.agg_out (2adf409a-3c13-4d0a-b084-ebd6015ac8ca).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.571-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 (2adf409a-3c13-4d0a-b084-ebd6015ac8ca) to test5_fsmdb0.agg_out and drop 10127493-e92e-46d5-ab1a-331bbba01f70.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.585-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.538-0500 I STORAGE [conn114] renameCollection: renaming collection 35fc03b2-7e7d-4d01-a1ac-c19415506b2e from test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.572-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.586-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.538-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (2adf409a-3c13-4d0a-b084-ebd6015ac8ca)'. Ident: 'index-1298-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 4044)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.572-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (10127493-e92e-46d5-ab1a-331bbba01f70) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 3536), t: 1 } and commit timestamp Timestamp(1574796801, 3536)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.587-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 (2adf409a-3c13-4d0a-b084-ebd6015ac8ca) to test5_fsmdb0.agg_out and drop 10127493-e92e-46d5-ab1a-331bbba01f70.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.538-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (2adf409a-3c13-4d0a-b084-ebd6015ac8ca)'. Ident: 'index-1299-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 4044)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.572-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (10127493-e92e-46d5-ab1a-331bbba01f70).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.589-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.538-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1296-8224331490264904478, commit timestamp: Timestamp(1574796801, 4044)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.572-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection 2adf409a-3c13-4d0a-b084-ebd6015ac8ca from test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.589-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (10127493-e92e-46d5-ab1a-331bbba01f70) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 3536), t: 1 } and commit timestamp Timestamp(1574796801, 3536)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.538-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.572-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (10127493-e92e-46d5-ab1a-331bbba01f70)'. Ident: 'index-1302--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 3536)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.589-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (10127493-e92e-46d5-ab1a-331bbba01f70).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.538-0500 I INDEX [conn108] Registering index build: 9ab44662-079a-4331-b7dd-8e10019f48ed
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.572-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (10127493-e92e-46d5-ab1a-331bbba01f70)'. Ident: 'index-1309--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 3536)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.589-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 2adf409a-3c13-4d0a-b084-ebd6015ac8ca from test5_fsmdb0.tmp.agg_out.fa519990-8229-4281-8db0-8e6085b9c418 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.538-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4462765843889391785, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3582681329171495748, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796801376), clusterTime: Timestamp(1574796801, 2087) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 2088), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 161ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.572-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1301--8000595249233899911, commit timestamp: Timestamp(1574796801, 3536)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.589-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (10127493-e92e-46d5-ab1a-331bbba01f70)'. Ident: 'index-1302--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 3536)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.538-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 369ca097-7ef1-4e33-ac4d-fbab0d130eb9: test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a ( 39b36a6e-5d33-4d14-8983-d91ca17abe20 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.573-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df with provided UUID: 31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a and options: { uuid: UUID("31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.589-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (10127493-e92e-46d5-ab1a-331bbba01f70)'. Ident: 'index-1309--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 3536)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.539-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.575-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 0b05ecda-2750-41d0-a143-04aa1c0c9a38: test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 ( 35fc03b2-7e7d-4d01-a1ac-c19415506b2e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.589-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1301--4104909142373009110, commit timestamp: Timestamp(1574796801, 3536)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.541-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 with generated UUID: 1033b2ac-b419-47ba-a600-768deb92b5e0 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.591-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.592-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: b1a57f1a-2bc9-4c4e-99e2-41a410e50221: test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 ( 35fc03b2-7e7d-4d01-a1ac-c19415506b2e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.542-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.612-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.592-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df with provided UUID: 31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a and options: { uuid: UUID("31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.544-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 928762fe-c1f2-4862-92a7-a3c7ccb34091: test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b ( c5d7a19e-fda3-4451-bba4-92e58d053153 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.612-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.608-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.567-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.612-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: d2d01958-26e0-4b1e-ae72-fd5f8b065027: test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e (d9f423b2-4f76-486d-b074-14eb2df3f1e3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.629-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.567-0500 I INDEX [conn114] Registering index build: e4eccc0e-2307-41e3-8044-7d9e730f85d2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.612-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.629-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.574-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.613-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.629-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 6140c590-3335-4e34-a4aa-f94a9d35d659: test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e (d9f423b2-4f76-486d-b074-14eb2df3f1e3 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.574-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.614-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.629-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.575-0500 I STORAGE [conn108] Index build initialized: 9ab44662-079a-4331-b7dd-8e10019f48ed: test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.620-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: d2d01958-26e0-4b1e-ae72-fd5f8b065027: test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e ( d9f423b2-4f76-486d-b074-14eb2df3f1e3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.630-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.575-0500 I INDEX [conn108] Waiting for index build to complete: 9ab44662-079a-4331-b7dd-8e10019f48ed
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.635-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.632-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.575-0500 I INDEX [conn46] Index build completed: 369ca097-7ef1-4e33-ac4d-fbab0d130eb9
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.635-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.637-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 6140c590-3335-4e34-a4aa-f94a9d35d659: test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e ( d9f423b2-4f76-486d-b074-14eb2df3f1e3 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.575-0500 I INDEX [conn110] Index build completed: 928762fe-c1f2-4862-92a7-a3c7ccb34091
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.635-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 87ce7d0a-3369-4389-bb06-23015d7638b9: test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a (39b36a6e-5d33-4d14-8983-d91ca17abe20 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.654-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.575-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.635-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.654-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.575-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a appName: "tid:4" command: createIndexes { createIndexes: "tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 3094), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 8094 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 125ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.635-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.654-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: 9ba0787e-c576-4bed-bbed-696cc6a9a1af: test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a (39b36a6e-5d33-4d14-8983-d91ca17abe20 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.575-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (35fc03b2-7e7d-4d01-a1ac-c19415506b2e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 4549), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.637-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.654-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.575-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (35fc03b2-7e7d-4d01-a1ac-c19415506b2e).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.637-0500 I COMMAND [ReplWriterWorker-2] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 (35fc03b2-7e7d-4d01-a1ac-c19415506b2e) to test5_fsmdb0.agg_out and drop 2adf409a-3c13-4d0a-b084-ebd6015ac8ca.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.654-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.575-0500 I STORAGE [conn112] renameCollection: renaming collection d9f423b2-4f76-486d-b074-14eb2df3f1e3 from test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.637-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.agg_out (2adf409a-3c13-4d0a-b084-ebd6015ac8ca) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 4044), t: 1 } and commit timestamp Timestamp(1574796801, 4044)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.656-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 (35fc03b2-7e7d-4d01-a1ac-c19415506b2e) to test5_fsmdb0.agg_out and drop 2adf409a-3c13-4d0a-b084-ebd6015ac8ca.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.575-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (35fc03b2-7e7d-4d01-a1ac-c19415506b2e)'. Ident: 'index-1304-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 4549)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.637-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.agg_out (2adf409a-3c13-4d0a-b084-ebd6015ac8ca).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.658-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.575-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (35fc03b2-7e7d-4d01-a1ac-c19415506b2e)'. Ident: 'index-1305-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 4549)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.637-0500 I STORAGE [ReplWriterWorker-2] renameCollection: renaming collection 35fc03b2-7e7d-4d01-a1ac-c19415506b2e from test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.658-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (2adf409a-3c13-4d0a-b084-ebd6015ac8ca) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 4044), t: 1 } and commit timestamp Timestamp(1574796801, 4044)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.575-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1302-8224331490264904478, commit timestamp: Timestamp(1574796801, 4549)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.638-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (2adf409a-3c13-4d0a-b084-ebd6015ac8ca)'. Ident: 'index-1308--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 4044)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.658-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (2adf409a-3c13-4d0a-b084-ebd6015ac8ca).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.575-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.638-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (2adf409a-3c13-4d0a-b084-ebd6015ac8ca)'. Ident: 'index-1315--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 4044)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.658-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 35fc03b2-7e7d-4d01-a1ac-c19415506b2e from test5_fsmdb0.tmp.agg_out.a67666ab-f4c4-480c-a026-2203f4c83ce8 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.575-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 4997636500080205671, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 9096995341155807368, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796801357), clusterTime: Timestamp(1574796801, 2019) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 2087), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, 
Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 200ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.638-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1307--8000595249233899911, commit timestamp: Timestamp(1574796801, 4044)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.658-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (2adf409a-3c13-4d0a-b084-ebd6015ac8ca)'. Ident: 'index-1308--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 4044)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.576-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.638-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 with provided UUID: 1033b2ac-b419-47ba-a600-768deb92b5e0 and options: { uuid: UUID("1033b2ac-b419-47ba-a600-768deb92b5e0"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.658-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (2adf409a-3c13-4d0a-b084-ebd6015ac8ca)'. Ident: 'index-1315--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 4044)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.578-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f with generated UUID: f68613f6-f86b-4934-9d39-3662504a0379 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.640-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 87ce7d0a-3369-4389-bb06-23015d7638b9: test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a ( 39b36a6e-5d33-4d14-8983-d91ca17abe20 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.658-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1307--4104909142373009110, commit timestamp: Timestamp(1574796801, 4044)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.587-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.654-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.659-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 with provided UUID: 1033b2ac-b419-47ba-a600-768deb92b5e0 and options: { uuid: UUID("1033b2ac-b419-47ba-a600-768deb92b5e0"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.606-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.671-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.660-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 9ba0787e-c576-4bed-bbed-696cc6a9a1af: test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a ( 39b36a6e-5d33-4d14-8983-d91ca17abe20 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.606-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.671-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.674-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.606-0500 I STORAGE [conn114] Index build initialized: e4eccc0e-2307-41e3-8044-7d9e730f85d2: test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 (1033b2ac-b419-47ba-a600-768deb92b5e0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.671-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: 2a0cd720-38a9-4538-9b34-285a3f449069: test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b (c5d7a19e-fda3-4451-bba4-92e58d053153 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.674-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796801, 4044) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796801, 4044), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 534 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 119ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.606-0500 I INDEX [conn114] Waiting for index build to complete: e4eccc0e-2307-41e3-8044-7d9e730f85d2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.671-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.689-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.609-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 9ab44662-079a-4331-b7dd-8e10019f48ed: test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df ( 31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.672-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.689-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.609-0500 I INDEX [conn108] Index build completed: 9ab44662-079a-4331-b7dd-8e10019f48ed
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.674-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.689-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: 15e2eb84-b590-4715-81b2-f956187177b9: test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b (c5d7a19e-fda3-4451-bba4-92e58d053153 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.616-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.677-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 2a0cd720-38a9-4538-9b34-285a3f449069: test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b ( c5d7a19e-fda3-4451-bba4-92e58d053153 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.690-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.616-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.680-0500 I COMMAND [ReplWriterWorker-1] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e (d9f423b2-4f76-486d-b074-14eb2df3f1e3) to test5_fsmdb0.agg_out and drop 35fc03b2-7e7d-4d01-a1ac-c19415506b2e.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.690-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.616-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (d9f423b2-4f76-486d-b074-14eb2df3f1e3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 5554), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.680-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.agg_out (35fc03b2-7e7d-4d01-a1ac-c19415506b2e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 4549), t: 1 } and commit timestamp Timestamp(1574796801, 4549)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.693-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.617-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (d9f423b2-4f76-486d-b074-14eb2df3f1e3).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.680-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.agg_out (35fc03b2-7e7d-4d01-a1ac-c19415506b2e).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.697-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 15e2eb84-b590-4715-81b2-f956187177b9: test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b ( c5d7a19e-fda3-4451-bba4-92e58d053153 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.617-0500 I STORAGE [conn46] renameCollection: renaming collection c5d7a19e-fda3-4451-bba4-92e58d053153 from test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.680-0500 I STORAGE [ReplWriterWorker-1] renameCollection: renaming collection d9f423b2-4f76-486d-b074-14eb2df3f1e3 from test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.699-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e (d9f423b2-4f76-486d-b074-14eb2df3f1e3) to test5_fsmdb0.agg_out and drop 35fc03b2-7e7d-4d01-a1ac-c19415506b2e.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.617-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d9f423b2-4f76-486d-b074-14eb2df3f1e3)'. Ident: 'index-1303-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 5554)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.680-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (35fc03b2-7e7d-4d01-a1ac-c19415506b2e)'. Ident: 'index-1314--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 4549)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.699-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (35fc03b2-7e7d-4d01-a1ac-c19415506b2e) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 4549), t: 1 } and commit timestamp Timestamp(1574796801, 4549)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.617-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d9f423b2-4f76-486d-b074-14eb2df3f1e3)'. Ident: 'index-1309-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 5554)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.680-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (35fc03b2-7e7d-4d01-a1ac-c19415506b2e)'. Ident: 'index-1321--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 4549)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.699-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (35fc03b2-7e7d-4d01-a1ac-c19415506b2e).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.617-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1301-8224331490264904478, commit timestamp: Timestamp(1574796801, 5554)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.680-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1313--8000595249233899911, commit timestamp: Timestamp(1574796801, 4549)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.699-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection d9f423b2-4f76-486d-b074-14eb2df3f1e3 from test5_fsmdb0.tmp.agg_out.da898876-d1b1-47d5-ac74-5b05b492720e to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.617-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.681-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f with provided UUID: f68613f6-f86b-4934-9d39-3662504a0379 and options: { uuid: UUID("f68613f6-f86b-4934-9d39-3662504a0379"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.699-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (35fc03b2-7e7d-4d01-a1ac-c19415506b2e)'. Ident: 'index-1314--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 4549)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.617-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 5643511656620399605, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8629756156808430304, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796801451), clusterTime: Timestamp(1574796801, 3159) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 3223), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 164ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.695-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.699-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (35fc03b2-7e7d-4d01-a1ac-c19415506b2e)'. Ident: 'index-1321--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 4549)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.617-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (c5d7a19e-fda3-4451-bba4-92e58d053153) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 5555), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.726-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.699-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1313--4104909142373009110, commit timestamp: Timestamp(1574796801, 4549)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.617-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (c5d7a19e-fda3-4451-bba4-92e58d053153).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.726-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.700-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f with provided UUID: f68613f6-f86b-4934-9d39-3662504a0379 and options: { uuid: UUID("f68613f6-f86b-4934-9d39-3662504a0379"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.617-0500 I STORAGE [conn112] renameCollection: renaming collection 39b36a6e-5d33-4d14-8983-d91ca17abe20 from test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.726-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 73c7faae-1d9b-4b3a-a7f2-dc169af375e2: test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.714-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.617-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c5d7a19e-fda3-4451-bba4-92e58d053153)'. Ident: 'index-1312-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 5555)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.726-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.743-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.617-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c5d7a19e-fda3-4451-bba4-92e58d053153)'. Ident: 'index-1317-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 5555)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.727-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.743-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.617-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1310-8224331490264904478, commit timestamp: Timestamp(1574796801, 5555)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.730-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.743-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 4675a20f-72dd-4e03-92cc-22ce735653d9: test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.617-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.733-0500 I COMMAND [ReplWriterWorker-8] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b (c5d7a19e-fda3-4451-bba4-92e58d053153) to test5_fsmdb0.agg_out and drop d9f423b2-4f76-486d-b074-14eb2df3f1e3.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.743-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.617-0500 I INDEX [conn110] Registering index build: 28335c92-2a5b-4ba0-a134-e902790ef519
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.734-0500 I STORAGE [ReplWriterWorker-8] dropCollection: test5_fsmdb0.agg_out (d9f423b2-4f76-486d-b074-14eb2df3f1e3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 5554), t: 1 } and commit timestamp Timestamp(1574796801, 5554)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.744-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.617-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7460966868397945550, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8982579860235228562, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796801419), clusterTime: Timestamp(1574796801, 2656) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 2720), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 197ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.734-0500 I STORAGE [ReplWriterWorker-8] Finishing collection drop for test5_fsmdb0.agg_out (d9f423b2-4f76-486d-b074-14eb2df3f1e3).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.746-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.618-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.734-0500 I STORAGE [ReplWriterWorker-8] renameCollection: renaming collection c5d7a19e-fda3-4451-bba4-92e58d053153 from test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.749-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b (c5d7a19e-fda3-4451-bba4-92e58d053153) to test5_fsmdb0.agg_out and drop d9f423b2-4f76-486d-b074-14eb2df3f1e3.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.620-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 with generated UUID: ed415e78-e196-4764-ba02-f7c7a993ff28 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.734-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d9f423b2-4f76-486d-b074-14eb2df3f1e3)'. Ident: 'index-1312--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 5554)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.749-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (d9f423b2-4f76-486d-b074-14eb2df3f1e3) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 5554), t: 1 } and commit timestamp Timestamp(1574796801, 5554)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.620-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 with generated UUID: 7453ea03-25be-4332-9cd8-ae4a023ac454 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.734-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d9f423b2-4f76-486d-b074-14eb2df3f1e3)'. Ident: 'index-1325--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 5554)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.749-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (d9f423b2-4f76-486d-b074-14eb2df3f1e3).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.627-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.734-0500 I STORAGE [ReplWriterWorker-8] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1311--8000595249233899911, commit timestamp: Timestamp(1574796801, 5554)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.749-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection c5d7a19e-fda3-4451-bba4-92e58d053153 from test5_fsmdb0.tmp.agg_out.ad098362-4884-4e15-bb3c-6e0fa6cb3e3b to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.650-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.734-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 73c7faae-1d9b-4b3a-a7f2-dc169af375e2: test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df ( 31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.749-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (d9f423b2-4f76-486d-b074-14eb2df3f1e3)'. Ident: 'index-1312--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 5554)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.650-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.734-0500 I COMMAND [ReplWriterWorker-10] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a (39b36a6e-5d33-4d14-8983-d91ca17abe20) to test5_fsmdb0.agg_out and drop c5d7a19e-fda3-4451-bba4-92e58d053153.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.749-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (d9f423b2-4f76-486d-b074-14eb2df3f1e3)'. Ident: 'index-1325--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 5554)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.650-0500 I STORAGE [conn110] Index build initialized: 28335c92-2a5b-4ba0-a134-e902790ef519: test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f (f68613f6-f86b-4934-9d39-3662504a0379 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.734-0500 I STORAGE [ReplWriterWorker-10] dropCollection: test5_fsmdb0.agg_out (c5d7a19e-fda3-4451-bba4-92e58d053153) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 5555), t: 1 } and commit timestamp Timestamp(1574796801, 5555)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.749-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1311--4104909142373009110, commit timestamp: Timestamp(1574796801, 5554)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.650-0500 I INDEX [conn110] Waiting for index build to complete: 28335c92-2a5b-4ba0-a134-e902790ef519
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.734-0500 I STORAGE [ReplWriterWorker-10] Finishing collection drop for test5_fsmdb0.agg_out (c5d7a19e-fda3-4451-bba4-92e58d053153).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.750-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 4675a20f-72dd-4e03-92cc-22ce735653d9: test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df ( 31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.650-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.735-0500 I STORAGE [ReplWriterWorker-10] renameCollection: renaming collection 39b36a6e-5d33-4d14-8983-d91ca17abe20 from test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.750-0500 I COMMAND [ReplWriterWorker-5] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a (39b36a6e-5d33-4d14-8983-d91ca17abe20) to test5_fsmdb0.agg_out and drop c5d7a19e-fda3-4451-bba4-92e58d053153.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.652-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: e4eccc0e-2307-41e3-8044-7d9e730f85d2: test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 ( 1033b2ac-b419-47ba-a600-768deb92b5e0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.735-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c5d7a19e-fda3-4451-bba4-92e58d053153)'. Ident: 'index-1320--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 5555)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.750-0500 I STORAGE [ReplWriterWorker-5] dropCollection: test5_fsmdb0.agg_out (c5d7a19e-fda3-4451-bba4-92e58d053153) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 5555), t: 1 } and commit timestamp Timestamp(1574796801, 5555)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.652-0500 I INDEX [conn114] Index build completed: e4eccc0e-2307-41e3-8044-7d9e730f85d2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.735-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c5d7a19e-fda3-4451-bba4-92e58d053153)'. Ident: 'index-1331--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 5555)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.750-0500 I STORAGE [ReplWriterWorker-5] Finishing collection drop for test5_fsmdb0.agg_out (c5d7a19e-fda3-4451-bba4-92e58d053153).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.661-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.735-0500 I STORAGE [ReplWriterWorker-10] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1319--8000595249233899911, commit timestamp: Timestamp(1574796801, 5555)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.750-0500 I STORAGE [ReplWriterWorker-5] renameCollection: renaming collection 39b36a6e-5d33-4d14-8983-d91ca17abe20 from test5_fsmdb0.tmp.agg_out.dc6d058f-dbc6-4027-bd6e-b2b9a74d555a to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.669-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.735-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 with provided UUID: ed415e78-e196-4764-ba02-f7c7a993ff28 and options: { uuid: UUID("ed415e78-e196-4764-ba02-f7c7a993ff28"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.750-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (c5d7a19e-fda3-4451-bba4-92e58d053153)'. Ident: 'index-1320--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 5555)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.669-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.751-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.750-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (c5d7a19e-fda3-4451-bba4-92e58d053153)'. Ident: 'index-1331--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 5555)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.672-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.751-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 with provided UUID: 7453ea03-25be-4332-9cd8-ae4a023ac454 and options: { uuid: UUID("7453ea03-25be-4332-9cd8-ae4a023ac454"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.750-0500 I STORAGE [ReplWriterWorker-5] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1319--4104909142373009110, commit timestamp: Timestamp(1574796801, 5555)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.672-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.767-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.752-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 with provided UUID: ed415e78-e196-4764-ba02-f7c7a993ff28 and options: { uuid: UUID("ed415e78-e196-4764-ba02-f7c7a993ff28"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.672-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (39b36a6e-5d33-4d14-8983-d91ca17abe20) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 6063), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.785-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.770-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.672-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (39b36a6e-5d33-4d14-8983-d91ca17abe20).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.785-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.771-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 with provided UUID: 7453ea03-25be-4332-9cd8-ae4a023ac454 and options: { uuid: UUID("7453ea03-25be-4332-9cd8-ae4a023ac454"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.672-0500 I STORAGE [conn112] renameCollection: renaming collection 31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a from test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.785-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 78262a42-427d-40c7-84a8-638a85f93244: test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 (1033b2ac-b419-47ba-a600-768deb92b5e0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.786-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.673-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (39b36a6e-5d33-4d14-8983-d91ca17abe20)'. Ident: 'index-1308-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 6063)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.785-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.806-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.673-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (39b36a6e-5d33-4d14-8983-d91ca17abe20)'. Ident: 'index-1313-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 6063)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.786-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.806-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.673-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1306-8224331490264904478, commit timestamp: Timestamp(1574796801, 6063)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.788-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.806-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: 3b54205e-98ea-4365-ae3c-5b2e149cfdad: test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 (1033b2ac-b419-47ba-a600-768deb92b5e0 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.673-0500 I INDEX [conn46] Registering index build: cf4a80bd-1906-450a-9ea8-9e6d4f28fc0a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.791-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 78262a42-427d-40c7-84a8-638a85f93244: test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 ( 1033b2ac-b419-47ba-a600-768deb92b5e0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.806-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.673-0500 I INDEX [conn108] Registering index build: 11e19b61-49c2-47db-8dc8-4153a7478aba
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.805-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.806-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.673-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8067331056730613278, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6298074262756035380, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796801488), clusterTime: Timestamp(1574796801, 3536) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 3536), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 184ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.805-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.809-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.674-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 28335c92-2a5b-4ba0-a134-e902790ef519: test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f ( f68613f6-f86b-4934-9d39-3662504a0379 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.805-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: c80f22b6-b1c8-4732-85de-12738676bd9b: test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f (f68613f6-f86b-4934-9d39-3662504a0379 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.812-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3b54205e-98ea-4365-ae3c-5b2e149cfdad: test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 ( 1033b2ac-b419-47ba-a600-768deb92b5e0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.676-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb with generated UUID: bfd786d5-4348-427e-8337-7cbb30a706e4 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.805-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.827-0500 I INDEX [ReplWriterWorker-5] index build: starting on test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.698-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.805-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.827-0500 I INDEX [ReplWriterWorker-5] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.698-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.806-0500 I COMMAND [ReplWriterWorker-6] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a) to test5_fsmdb0.agg_out and drop 39b36a6e-5d33-4d14-8983-d91ca17abe20.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.827-0500 I STORAGE [ReplWriterWorker-5] Index build initialized: e771f29b-ce78-48b3-ac19-36234497c0c4: test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f (f68613f6-f86b-4934-9d39-3662504a0379 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.698-0500 I STORAGE [conn46] Index build initialized: cf4a80bd-1906-450a-9ea8-9e6d4f28fc0a: test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 (7453ea03-25be-4332-9cd8-ae4a023ac454 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.807-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.827-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.698-0500 I INDEX [conn46] Waiting for index build to complete: cf4a80bd-1906-450a-9ea8-9e6d4f28fc0a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.807-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.agg_out (39b36a6e-5d33-4d14-8983-d91ca17abe20) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 6063), t: 1 } and commit timestamp Timestamp(1574796801, 6063)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.828-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.698-0500 I INDEX [conn110] Index build completed: 28335c92-2a5b-4ba0-a134-e902790ef519
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.807-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.agg_out (39b36a6e-5d33-4d14-8983-d91ca17abe20).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.829-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a) to test5_fsmdb0.agg_out and drop 39b36a6e-5d33-4d14-8983-d91ca17abe20.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.706-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.807-0500 I STORAGE [ReplWriterWorker-6] renameCollection: renaming collection 31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a from test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.830-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.706-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.807-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (39b36a6e-5d33-4d14-8983-d91ca17abe20)'. Ident: 'index-1318--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 6063)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.830-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (39b36a6e-5d33-4d14-8983-d91ca17abe20) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 6063), t: 1 } and commit timestamp Timestamp(1574796801, 6063)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.706-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 6566), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.807-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (39b36a6e-5d33-4d14-8983-d91ca17abe20)'. Ident: 'index-1327--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 6063)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.830-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (39b36a6e-5d33-4d14-8983-d91ca17abe20).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.706-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.807-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1317--8000595249233899911, commit timestamp: Timestamp(1574796801, 6063)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.830-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a from test5_fsmdb0.tmp.agg_out.d766897d-a1c6-444a-bce9-e3482810e1df to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.707-0500 I STORAGE [conn112] renameCollection: renaming collection 1033b2ac-b419-47ba-a600-768deb92b5e0 from test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.808-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb with provided UUID: bfd786d5-4348-427e-8337-7cbb30a706e4 and options: { uuid: UUID("bfd786d5-4348-427e-8337-7cbb30a706e4"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.830-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (39b36a6e-5d33-4d14-8983-d91ca17abe20)'. Ident: 'index-1318--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 6063)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.707-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a)'. Ident: 'index-1316-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 6566)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.810-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c80f22b6-b1c8-4732-85de-12738676bd9b: test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f ( f68613f6-f86b-4934-9d39-3662504a0379 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.830-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (39b36a6e-5d33-4d14-8983-d91ca17abe20)'. Ident: 'index-1327--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 6063)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.707-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a)'. Ident: 'index-1319-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 6566)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.826-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.830-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1317--4104909142373009110, commit timestamp: Timestamp(1574796801, 6063)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.707-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1314-8224331490264904478, commit timestamp: Timestamp(1574796801, 6566)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.832-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 (1033b2ac-b419-47ba-a600-768deb92b5e0) to test5_fsmdb0.agg_out and drop 31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.831-0500 I STORAGE [ReplWriterWorker-6] createCollection: test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb with provided UUID: bfd786d5-4348-427e-8337-7cbb30a706e4 and options: { uuid: UUID("bfd786d5-4348-427e-8337-7cbb30a706e4"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.707-0500 I INDEX [conn114] Registering index build: 907833f7-6409-4741-ac7f-d5c9bac88c55
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.832-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 6566), t: 1 } and commit timestamp Timestamp(1574796801, 6566)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.831-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e771f29b-ce78-48b3-ac19-36234497c0c4: test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f ( f68613f6-f86b-4934-9d39-3662504a0379 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.707-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.832-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.845-0500 I INDEX [ReplWriterWorker-6] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.707-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8589456090957643645, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1142263192884778740, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796801539), clusterTime: Timestamp(1574796801, 4044) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 4044), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 166ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.832-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection 1033b2ac-b419-47ba-a600-768deb92b5e0 from test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.845-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796801, 6063) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796801, 6064), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 12239 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 152ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.708-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.832-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a)'. Ident: 'index-1324--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 6566)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.850-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 (1033b2ac-b419-47ba-a600-768deb92b5e0) to test5_fsmdb0.agg_out and drop 31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.710-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f with generated UUID: c878ff31-e3ff-4be3-80b1-5fddf15f473c and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.832-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a)'. Ident: 'index-1335--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 6566)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.850-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 6566), t: 1 } and commit timestamp Timestamp(1574796801, 6566)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.718-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.832-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1323--8000595249233899911, commit timestamp: Timestamp(1574796801, 6566)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.850-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.720-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: cf4a80bd-1906-450a-9ea8-9e6d4f28fc0a: test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 ( 7453ea03-25be-4332-9cd8-ae4a023ac454 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.833-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f with provided UUID: c878ff31-e3ff-4be3-80b1-5fddf15f473c and options: { uuid: UUID("c878ff31-e3ff-4be3-80b1-5fddf15f473c"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.850-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 1033b2ac-b419-47ba-a600-768deb92b5e0 from test5_fsmdb0.tmp.agg_out.b67030d8-6486-473e-a95d-2e8dd98e9107 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.729-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.845-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.850-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a)'. Ident: 'index-1324--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 6566)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.729-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.861-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.850-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (31e2bb7e-1fe0-4e27-99d9-f9dc53e9234a)'. Ident: 'index-1335--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 6566)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.729-0500 I STORAGE [conn108] Index build initialized: 11e19b61-49c2-47db-8dc8-4153a7478aba: test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 (ed415e78-e196-4764-ba02-f7c7a993ff28 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.861-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.850-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1323--4104909142373009110, commit timestamp: Timestamp(1574796801, 6566)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.729-0500 I INDEX [conn108] Waiting for index build to complete: 11e19b61-49c2-47db-8dc8-4153a7478aba
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.861-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: 18f5493b-c451-4c14-a320-7dd746c256ed: test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 (7453ea03-25be-4332-9cd8-ae4a023ac454 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.851-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f with provided UUID: c878ff31-e3ff-4be3-80b1-5fddf15f473c and options: { uuid: UUID("c878ff31-e3ff-4be3-80b1-5fddf15f473c"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.729-0500 I INDEX [conn46] Index build completed: cf4a80bd-1906-450a-9ea8-9e6d4f28fc0a
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.861-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.864-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.729-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.862-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.881-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.737-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.864-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.881-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.752-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.866-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 18f5493b-c451-4c14-a320-7dd746c256ed: test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 ( 7453ea03-25be-4332-9cd8-ae4a023ac454 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.881-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 50586be7-829a-41dd-b447-9e78764dff62: test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 (7453ea03-25be-4332-9cd8-ae4a023ac454 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.755-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.882-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.881-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.763-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.882-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.882-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.763-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.882-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: 032269ab-bacd-432b-be62-a26a4cf13619: test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 (ed415e78-e196-4764-ba02-f7c7a993ff28 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.884-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.763-0500 I STORAGE [conn114] Index build initialized: 907833f7-6409-4741-ac7f-d5c9bac88c55: test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb (bfd786d5-4348-427e-8337-7cbb30a706e4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.882-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.887-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 50586be7-829a-41dd-b447-9e78764dff62: test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 ( 7453ea03-25be-4332-9cd8-ae4a023ac454 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.763-0500 I INDEX [conn114] Waiting for index build to complete: 907833f7-6409-4741-ac7f-d5c9bac88c55
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.883-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.899-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.763-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.884-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f (f68613f6-f86b-4934-9d39-3662504a0379) to test5_fsmdb0.agg_out and drop 1033b2ac-b419-47ba-a600-768deb92b5e0.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.899-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.763-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (1033b2ac-b419-47ba-a600-768deb92b5e0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 7074), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.885-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.899-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 1c79b905-fa5b-4c5a-a681-cf765159bf26: test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 (ed415e78-e196-4764-ba02-f7c7a993ff28 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.763-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (1033b2ac-b419-47ba-a600-768deb92b5e0).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.886-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (1033b2ac-b419-47ba-a600-768deb92b5e0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 7074), t: 1 } and commit timestamp Timestamp(1574796801, 7074)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.899-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.763-0500 I STORAGE [conn112] renameCollection: renaming collection f68613f6-f86b-4934-9d39-3662504a0379 from test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.886-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (1033b2ac-b419-47ba-a600-768deb92b5e0).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.900-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.763-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1033b2ac-b419-47ba-a600-768deb92b5e0)'. Ident: 'index-1321-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 7074)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.886-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection f68613f6-f86b-4934-9d39-3662504a0379 from test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.901-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f (f68613f6-f86b-4934-9d39-3662504a0379) to test5_fsmdb0.agg_out and drop 1033b2ac-b419-47ba-a600-768deb92b5e0.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.763-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1033b2ac-b419-47ba-a600-768deb92b5e0)'. Ident: 'index-1323-8224331490264904478', commit timestamp: 'Timestamp(1574796801, 7074)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.886-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1033b2ac-b419-47ba-a600-768deb92b5e0)'. Ident: 'index-1330--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 7074)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.901-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.763-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1320-8224331490264904478, commit timestamp: Timestamp(1574796801, 7074)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.886-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1033b2ac-b419-47ba-a600-768deb92b5e0)'. Ident: 'index-1341--8000595249233899911', commit timestamp: 'Timestamp(1574796801, 7074)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.902-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (1033b2ac-b419-47ba-a600-768deb92b5e0) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796801, 7074), t: 1 } and commit timestamp Timestamp(1574796801, 7074)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.764-0500 I INDEX [conn110] Registering index build: e284f7cf-0929-4581-8bb6-0a019218cb9f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.886-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1329--8000595249233899911, commit timestamp: Timestamp(1574796801, 7074)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.902-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (1033b2ac-b419-47ba-a600-768deb92b5e0).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.764-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:21.887-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 032269ab-bacd-432b-be62-a26a4cf13619: test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 ( ed415e78-e196-4764-ba02-f7c7a993ff28 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.902-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection f68613f6-f86b-4934-9d39-3662504a0379 from test5_fsmdb0.tmp.agg_out.616327b6-e729-4542-a67f-681f50bb260f to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.764-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 11e19b61-49c2-47db-8dc8-4153a7478aba: test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 ( ed415e78-e196-4764-ba02-f7c7a993ff28 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.524-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 with provided UUID: 599be818-51b8-4e66-b20a-d37789c9cd41 and options: { uuid: UUID("599be818-51b8-4e66-b20a-d37789c9cd41"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.902-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (1033b2ac-b419-47ba-a600-768deb92b5e0)'. Ident: 'index-1330--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 7074)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.764-0500 I COMMAND [conn71] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 7308451150255616559, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 461008205411134550, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796801577), clusterTime: Timestamp(1574796801, 4549) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 4549), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 186ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.539-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.902-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (1033b2ac-b419-47ba-a600-768deb92b5e0)'. Ident: 'index-1341--4104909142373009110', commit timestamp: 'Timestamp(1574796801, 7074)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.764-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.902-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1329--4104909142373009110, commit timestamp: Timestamp(1574796801, 7074)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.767-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 with generated UUID: 599be818-51b8-4e66-b20a-d37789c9cd41 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:21.903-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 1c79b905-fa5b-4c5a-a681-cf765159bf26: test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 ( ed415e78-e196-4764-ba02-f7c7a993ff28 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.772-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.788-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.788-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.788-0500 I STORAGE [conn110] Index build initialized: e284f7cf-0929-4581-8bb6-0a019218cb9f: test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f (c878ff31-e3ff-4be3-80b1-5fddf15f473c ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.788-0500 I INDEX [conn110] Waiting for index build to complete: e284f7cf-0929-4581-8bb6-0a019218cb9f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.788-0500 I INDEX [conn108] Index build completed: 11e19b61-49c2-47db-8dc8-4153a7478aba
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.789-0500 I COMMAND [conn108] command test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 appName: "tid:0" command: createIndexes { createIndexes: "tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 6060), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 2 }, timeAcquiringMicros: { w: 19709 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 127ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.541-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 with provided UUID: 599be818-51b8-4e66-b20a-d37789c9cd41 and options: { uuid: UUID("599be818-51b8-4e66-b20a-d37789c9cd41"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.789-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 907833f7-6409-4741-ac7f-d5c9bac88c55: test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb ( bfd786d5-4348-427e-8337-7cbb30a706e4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:21.797-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.516-0500 I INDEX [conn114] Index build completed: 907833f7-6409-4741-ac7f-d5c9bac88c55
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.516-0500 I COMMAND [conn114] command test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb appName: "tid:2" command: createIndexes { createIndexes: "tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 6566), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 111 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2809ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.516-0500 I COMMAND [conn112] command test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 appName: "tid:1" command: create { create: "tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97", temp: true, validationLevel: "strict", validationAction: "error", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 7138), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2749ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.516-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.516-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (f68613f6-f86b-4934-9d39-3662504a0379) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 1), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.516-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (f68613f6-f86b-4934-9d39-3662504a0379).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.516-0500 I STORAGE [conn46] renameCollection: renaming collection 7453ea03-25be-4332-9cd8-ae4a023ac454 from test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.516-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f68613f6-f86b-4934-9d39-3662504a0379)'. Ident: 'index-1326-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.517-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f68613f6-f86b-4934-9d39-3662504a0379)'. Ident: 'index-1327-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.517-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1324-8224331490264904478, commit timestamp: Timestamp(1574796804, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.517-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.517-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 appName: "tid:4" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "strict", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 7577), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "admin" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2734468 } }, Collection: { acquireCount: { r: 1, W: 2 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2734ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.517-0500 I INDEX [conn114] Registering index build: 3343eda7-ecee-46dc-8cd8-cf2ccfb62781
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.517-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796801, 6063), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796801, 6064), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796801, 6063). Collection minimum timestamp is Timestamp(1574796801, 7139)" errName:SnapshotUnavailable errCode:246 reslen:602 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2670000 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2670ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.517-0500 I COMMAND [conn67] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 627724860242125659, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2009451706516045336, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796801619), clusterTime: Timestamp(1574796801, 5555) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 5555), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2897ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.517-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.520-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 with generated UUID: 983f282f-409f-4dfa-8f15-1c4a81b6c121 and options: { temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.524-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.538-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.538-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.538-0500 I STORAGE [conn114] Index build initialized: 3343eda7-ecee-46dc-8cd8-cf2ccfb62781: test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 (599be818-51b8-4e66-b20a-d37789c9cd41 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.538-0500 I INDEX [conn114] Waiting for index build to complete: 3343eda7-ecee-46dc-8cd8-cf2ccfb62781
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.539-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.540-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: e284f7cf-0929-4581-8bb6-0a019218cb9f: test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f ( c878ff31-e3ff-4be3-80b1-5fddf15f473c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.540-0500 I INDEX [conn110] Index build completed: e284f7cf-0929-4581-8bb6-0a019218cb9f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.540-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f appName: "tid:3" command: createIndexes { createIndexes: "tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f", indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 7070), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:458 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 10992 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 2787ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.548-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.548-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.548-0500 I INDEX [conn46] Registering index build: 5b5774d2-d2a4-4e30-8ed6-1f7b4d2328a9
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.551-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.555-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.556-0500 I INDEX [ReplWriterWorker-14] index build: starting on test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.556-0500 I INDEX [ReplWriterWorker-14] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.556-0500 I STORAGE [ReplWriterWorker-14] Index build initialized: 3faa0e71-afb9-48b4-8341-b3f38792c680: test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb (bfd786d5-4348-427e-8337-7cbb30a706e4 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.556-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.557-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.559-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.561-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: 3343eda7-ecee-46dc-8cd8-cf2ccfb62781: test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 ( 599be818-51b8-4e66-b20a-d37789c9cd41 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.561-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 (7453ea03-25be-4332-9cd8-ae4a023ac454) to test5_fsmdb0.agg_out and drop f68613f6-f86b-4934-9d39-3662504a0379.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.561-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (f68613f6-f86b-4934-9d39-3662504a0379) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 1), t: 1 } and commit timestamp Timestamp(1574796804, 1)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.561-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (f68613f6-f86b-4934-9d39-3662504a0379).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.561-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection 7453ea03-25be-4332-9cd8-ae4a023ac454 from test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.561-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f68613f6-f86b-4934-9d39-3662504a0379)'. Ident: 'index-1334--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 1)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.561-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f68613f6-f86b-4934-9d39-3662504a0379)'. Ident: 'index-1343--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 1)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.561-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1333--8000595249233899911, commit timestamp: Timestamp(1574796804, 1)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.562-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 with provided UUID: 983f282f-409f-4dfa-8f15-1c4a81b6c121 and options: { uuid: UUID("983f282f-409f-4dfa-8f15-1c4a81b6c121"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.564-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 3faa0e71-afb9-48b4-8341-b3f38792c680: test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb ( bfd786d5-4348-427e-8337-7cbb30a706e4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.569-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.569-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.569-0500 I STORAGE [conn46] Index build initialized: 5b5774d2-d2a4-4e30-8ed6-1f7b4d2328a9: test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 (983f282f-409f-4dfa-8f15-1c4a81b6c121 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.569-0500 I INDEX [conn46] Waiting for index build to complete: 5b5774d2-d2a4-4e30-8ed6-1f7b4d2328a9
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.569-0500 I INDEX [conn114] Index build completed: 3343eda7-ecee-46dc-8cd8-cf2ccfb62781
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.569-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.570-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (7453ea03-25be-4332-9cd8-ae4a023ac454) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 1137), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.570-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (7453ea03-25be-4332-9cd8-ae4a023ac454).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.570-0500 I STORAGE [conn108] renameCollection: renaming collection ed415e78-e196-4764-ba02-f7c7a993ff28 from test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.570-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7453ea03-25be-4332-9cd8-ae4a023ac454)'. Ident: 'index-1332-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 1137)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.570-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7453ea03-25be-4332-9cd8-ae4a023ac454)'. Ident: 'index-1333-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 1137)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.570-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1329-8224331490264904478, commit timestamp: Timestamp(1574796804, 1137)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.570-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.570-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (ed415e78-e196-4764-ba02-f7c7a993ff28) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 1138), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.570-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (ed415e78-e196-4764-ba02-f7c7a993ff28).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.570-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8775221017326093672, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1403908174065812826, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796801618), clusterTime: Timestamp(1574796801, 5555) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 5555), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2950ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.570-0500 I STORAGE [conn112] renameCollection: renaming collection bfd786d5-4348-427e-8337-7cbb30a706e4 from test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.570-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ed415e78-e196-4764-ba02-f7c7a993ff28)'. Ident: 'index-1331-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 1138)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.570-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ed415e78-e196-4764-ba02-f7c7a993ff28)'. Ident: 'index-1337-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 1138)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.570-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1328-8224331490264904478, commit timestamp: Timestamp(1574796804, 1138)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.570-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:24.570-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796801, 5554), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2951ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.570-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2745671230401661388, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 685380461165260423, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796801674), clusterTime: Timestamp(1574796801, 6063) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 6063), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2895ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.572-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.580-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:24.626-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796801, 6566), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:819 protocol:op_msg 2917ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.572-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:24.571-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796801, 6063), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2896ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:24.651-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796801, 7074), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:819 protocol:op_msg 2885ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.571-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.599-0500 I INDEX [ReplWriterWorker-4] index build: starting on test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:24.688-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796804, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:819 protocol:op_msg 170ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.572-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 973a4c82-f233-49fa-af7f-ae336df3de71: test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb (bfd786d5-4348-427e-8337-7cbb30a706e4 ): indexes: 1
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:24.779-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796804, 2209), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 151ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.573-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.599-0500 I INDEX [ReplWriterWorker-4] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:24.701-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796804, 1138), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 129ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.572-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:24.859-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796804, 2522), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:818 protocol:op_msg 206ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.574-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 with generated UUID: 0c15630f-e174-4754-9a79-0a5fddb34b68 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.599-0500 I STORAGE [ReplWriterWorker-4] Index build initialized: 1ab25aba-fde9-458c-beab-512418337949: test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f (c878ff31-e3ff-4be3-80b1-5fddf15f473c ): indexes: 1
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:24.747-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796804, 1333), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 172ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.573-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.575-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 with generated UUID: 6c4f7f30-1ada-46a9-a83d-67ac6f231f66 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.600-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:24.862-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796804, 3416), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:818 protocol:op_msg 138ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.575-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.576-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 5b5774d2-d2a4-4e30-8ed6-1f7b4d2328a9: test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 ( 983f282f-409f-4dfa-8f15-1c4a81b6c121 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.600-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:24.862-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796804, 3027), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:818 protocol:op_msg 173ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.577-0500 I COMMAND [ReplWriterWorker-13] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 (7453ea03-25be-4332-9cd8-ae4a023ac454) to test5_fsmdb0.agg_out and drop f68613f6-f86b-4934-9d39-3662504a0379.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.576-0500 I INDEX [conn46] Index build completed: 5b5774d2-d2a4-4e30-8ed6-1f7b4d2328a9
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.603-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.577-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 973a4c82-f233-49fa-af7f-ae336df3de71: test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb ( bfd786d5-4348-427e-8337-7cbb30a706e4 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.601-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.609-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 1ab25aba-fde9-458c-beab-512418337949: test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f ( c878ff31-e3ff-4be3-80b1-5fddf15f473c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.578-0500 I STORAGE [ReplWriterWorker-13] dropCollection: test5_fsmdb0.agg_out (f68613f6-f86b-4934-9d39-3662504a0379) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 1), t: 1 } and commit timestamp Timestamp(1574796804, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.607-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.629-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.578-0500 I STORAGE [ReplWriterWorker-13] Finishing collection drop for test5_fsmdb0.agg_out (f68613f6-f86b-4934-9d39-3662504a0379).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.607-0500 I INDEX [conn112] Registering index build: 0fd8da0d-535b-4c6e-a8a4-777efa1b0ff0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.629-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.578-0500 I STORAGE [ReplWriterWorker-13] renameCollection: renaming collection 7453ea03-25be-4332-9cd8-ae4a023ac454 from test5_fsmdb0.tmp.agg_out.c0e46f8d-1c7b-4474-9ca4-53cde0422a07 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.607-0500 I INDEX [conn108] Registering index build: 61659d43-3267-4509-825d-3328f967fac4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.629-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 0291f889-7d0a-462d-bc1e-4d08721d017b: test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 (599be818-51b8-4e66-b20a-d37789c9cd41 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.578-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (f68613f6-f86b-4934-9d39-3662504a0379)'. Ident: 'index-1334--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.607-0500 I COMMAND [conn110] CMD: drop test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.629-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.578-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (f68613f6-f86b-4934-9d39-3662504a0379)'. Ident: 'index-1343--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 1)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.625-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.630-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.578-0500 I STORAGE [ReplWriterWorker-13] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1333--4104909142373009110, commit timestamp: Timestamp(1574796804, 1)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.625-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.631-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 (ed415e78-e196-4764-ba02-f7c7a993ff28) to test5_fsmdb0.agg_out and drop 7453ea03-25be-4332-9cd8-ae4a023ac454.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.581-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 with provided UUID: 983f282f-409f-4dfa-8f15-1c4a81b6c121 and options: { uuid: UUID("983f282f-409f-4dfa-8f15-1c4a81b6c121"), temp: true, validationLevel: "strict", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.625-0500 I STORAGE [conn112] Index build initialized: 0fd8da0d-535b-4c6e-a8a4-777efa1b0ff0: test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 (0c15630f-e174-4754-9a79-0a5fddb34b68 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.633-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.595-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.625-0500 I INDEX [conn112] Waiting for index build to complete: 0fd8da0d-535b-4c6e-a8a4-777efa1b0ff0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.633-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (7453ea03-25be-4332-9cd8-ae4a023ac454) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 1137), t: 1 } and commit timestamp Timestamp(1574796804, 1137)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.616-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.625-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f (c878ff31-e3ff-4be3-80b1-5fddf15f473c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.633-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (7453ea03-25be-4332-9cd8-ae4a023ac454).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.616-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.625-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f (c878ff31-e3ff-4be3-80b1-5fddf15f473c).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.633-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection ed415e78-e196-4764-ba02-f7c7a993ff28 from test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.616-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: dc08c9bc-4174-4526-9a88-c12193a08127: test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f (c878ff31-e3ff-4be3-80b1-5fddf15f473c ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.625-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f (c878ff31-e3ff-4be3-80b1-5fddf15f473c)'. Ident: 'index-1341-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 2145)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.633-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7453ea03-25be-4332-9cd8-ae4a023ac454)'. Ident: 'index-1340--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 1137)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.616-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.625-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f (c878ff31-e3ff-4be3-80b1-5fddf15f473c)'. Ident: 'index-1343-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 2145)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.633-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7453ea03-25be-4332-9cd8-ae4a023ac454)'. Ident: 'index-1349--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 1137)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.617-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.625-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f'. Ident: collection-1338-8224331490264904478, commit timestamp: Timestamp(1574796804, 2145)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.633-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1339--8000595249233899911, commit timestamp: Timestamp(1574796804, 1137)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.619-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.625-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.634-0500 I COMMAND [ReplWriterWorker-15] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb (bfd786d5-4348-427e-8337-7cbb30a706e4) to test5_fsmdb0.agg_out and drop ed415e78-e196-4764-ba02-f7c7a993ff28.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.625-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: dc08c9bc-4174-4526-9a88-c12193a08127: test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f ( c878ff31-e3ff-4be3-80b1-5fddf15f473c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.625-0500 I COMMAND [conn70] command test5_fsmdb0.agg_out appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 8697083610219266555, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1670215535441354989, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796801708), clusterTime: Timestamp(1574796801, 6566) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 6566), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2916ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.634-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.agg_out (ed415e78-e196-4764-ba02-f7c7a993ff28) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 1138), t: 1 } and commit timestamp Timestamp(1574796804, 1138)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.644-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.626-0500 I COMMAND [conn114] CMD: drop test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.634-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.agg_out (ed415e78-e196-4764-ba02-f7c7a993ff28).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.644-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.626-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.634-0500 I STORAGE [ReplWriterWorker-15] renameCollection: renaming collection bfd786d5-4348-427e-8337-7cbb30a706e4 from test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.644-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: c8282325-b823-4930-b22f-9f6fa1869f65: test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 (599be818-51b8-4e66-b20a-d37789c9cd41 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.629-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d with generated UUID: aaa32297-72bb-416d-8098-7e1f16597149 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.634-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ed415e78-e196-4764-ba02-f7c7a993ff28)'. Ident: 'index-1338--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 1138)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.644-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.637-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.634-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ed415e78-e196-4764-ba02-f7c7a993ff28)'. Ident: 'index-1351--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 1138)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.644-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.650-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.634-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1337--8000595249233899911, commit timestamp: Timestamp(1574796804, 1138)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.646-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 (ed415e78-e196-4764-ba02-f7c7a993ff28) to test5_fsmdb0.agg_out and drop 7453ea03-25be-4332-9cd8-ae4a023ac454.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.650-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.635-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 0291f889-7d0a-462d-bc1e-4d08721d017b: test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 ( 599be818-51b8-4e66-b20a-d37789c9cd41 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.646-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.650-0500 I STORAGE [conn108] Index build initialized: 61659d43-3267-4509-825d-3328f967fac4: test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 (6c4f7f30-1ada-46a9-a83d-67ac6f231f66 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.652-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.647-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (7453ea03-25be-4332-9cd8-ae4a023ac454) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 1137), t: 1 } and commit timestamp Timestamp(1574796804, 1137)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.650-0500 I INDEX [conn108] Waiting for index build to complete: 61659d43-3267-4509-825d-3328f967fac4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.652-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.647-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (7453ea03-25be-4332-9cd8-ae4a023ac454).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.650-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 (599be818-51b8-4e66-b20a-d37789c9cd41) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.652-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: e6ee1ea8-519b-409a-8ca8-638cc899edb9: test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 (983f282f-409f-4dfa-8f15-1c4a81b6c121 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.647-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection ed415e78-e196-4764-ba02-f7c7a993ff28 from test5_fsmdb0.tmp.agg_out.d4790a8e-066b-4a01-afd2-036e57a3bfc1 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.650-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 (599be818-51b8-4e66-b20a-d37789c9cd41).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.652-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.647-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (7453ea03-25be-4332-9cd8-ae4a023ac454)'. Ident: 'index-1340--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 1137)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.650-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 (599be818-51b8-4e66-b20a-d37789c9cd41)'. Ident: 'index-1346-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 2522)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.653-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.647-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (7453ea03-25be-4332-9cd8-ae4a023ac454)'. Ident: 'index-1349--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 1137)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.650-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 (599be818-51b8-4e66-b20a-d37789c9cd41)'. Ident: 'index-1347-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 2522)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.655-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.647-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1339--4104909142373009110, commit timestamp: Timestamp(1574796804, 1137)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.650-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97'. Ident: collection-1344-8224331490264904478, commit timestamp: Timestamp(1574796804, 2522)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.658-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: e6ee1ea8-519b-409a-8ca8-638cc899edb9: test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 ( 983f282f-409f-4dfa-8f15-1c4a81b6c121 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.647-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb (bfd786d5-4348-427e-8337-7cbb30a706e4) to test5_fsmdb0.agg_out and drop ed415e78-e196-4764-ba02-f7c7a993ff28.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.651-0500 I COMMAND [conn71] command test5_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 1942215408082274862, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 3954673015441888795, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796801765), clusterTime: Timestamp(1574796801, 7074) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796801, 7074), signature: { hash: BinData(0, 1E8F9AC6FD6C76503938AC9BA8C832D99CC7A0C5), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2884ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.660-0500 I STORAGE [ReplWriterWorker-7] createCollection: test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 with provided UUID: 0c15630f-e174-4754-9a79-0a5fddb34b68 and options: { uuid: UUID("0c15630f-e174-4754-9a79-0a5fddb34b68"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.648-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (ed415e78-e196-4764-ba02-f7c7a993ff28) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 1138), t: 1 } and commit timestamp Timestamp(1574796804, 1138)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.651-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 0fd8da0d-535b-4c6e-a8a4-777efa1b0ff0: test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 ( 0c15630f-e174-4754-9a79-0a5fddb34b68 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.675-0500 I INDEX [ReplWriterWorker-7] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.648-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (ed415e78-e196-4764-ba02-f7c7a993ff28).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.651-0500 I INDEX [conn112] Index build completed: 0fd8da0d-535b-4c6e-a8a4-777efa1b0ff0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.677-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 with provided UUID: 6c4f7f30-1ada-46a9-a83d-67ac6f231f66 and options: { uuid: UUID("6c4f7f30-1ada-46a9-a83d-67ac6f231f66"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.648-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection bfd786d5-4348-427e-8337-7cbb30a706e4 from test5_fsmdb0.tmp.agg_out.7c50ba8a-0ed5-4235-bad4-8b45dfbd4dfb to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.659-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.693-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.648-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (ed415e78-e196-4764-ba02-f7c7a993ff28)'. Ident: 'index-1338--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 1138)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.660-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.702-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.648-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (ed415e78-e196-4764-ba02-f7c7a993ff28)'. Ident: 'index-1351--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 1138)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.660-0500 I INDEX [conn110] Registering index build: f7a625c0-10b6-4e41-8ebd-1e19a35fbd32
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.702-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f (c878ff31-e3ff-4be3-80b1-5fddf15f473c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 2145), t: 1 } and commit timestamp Timestamp(1574796804, 2145)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.648-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1337--4104909142373009110, commit timestamp: Timestamp(1574796804, 1138)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.660-0500 I COMMAND [conn46] CMD: drop test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.702-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f (c878ff31-e3ff-4be3-80b1-5fddf15f473c).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.648-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: c8282325-b823-4930-b22f-9f6fa1869f65: test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 ( 599be818-51b8-4e66-b20a-d37789c9cd41 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.660-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.702-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f (c878ff31-e3ff-4be3-80b1-5fddf15f473c)'. Ident: 'index-1348--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 2145)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.668-0500 I INDEX [ReplWriterWorker-1] index build: starting on test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.661-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d with generated UUID: ea5b93ee-8c25-4f30-ac71-610e7a69c1fc and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.702-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f (c878ff31-e3ff-4be3-80b1-5fddf15f473c)'. Ident: 'index-1359--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 2145)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.668-0500 I INDEX [ReplWriterWorker-1] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.671-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.702-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f'. Ident: collection-1347--8000595249233899911, commit timestamp: Timestamp(1574796804, 2145)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.668-0500 I STORAGE [ReplWriterWorker-1] Index build initialized: 11272781-02e8-4376-9e3f-36280fea9693: test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 (983f282f-409f-4dfa-8f15-1c4a81b6c121 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.687-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.706-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d with provided UUID: aaa32297-72bb-416d-8098-7e1f16597149 and options: { uuid: UUID("aaa32297-72bb-416d-8098-7e1f16597149"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.668-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.687-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.719-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.669-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.688-0500 I STORAGE [conn110] Index build initialized: f7a625c0-10b6-4e41-8ebd-1e19a35fbd32: test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d (aaa32297-72bb-416d-8098-7e1f16597149 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.739-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.672-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.688-0500 I INDEX [conn110] Waiting for index build to complete: f7a625c0-10b6-4e41-8ebd-1e19a35fbd32
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.739-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.673-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 11272781-02e8-4376-9e3f-36280fea9693: test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 ( 983f282f-409f-4dfa-8f15-1c4a81b6c121 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.688-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.739-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: 2ae54130-0502-41fe-9d09-649fed1445e1: test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 (0c15630f-e174-4754-9a79-0a5fddb34b68 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.676-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 with provided UUID: 0c15630f-e174-4754-9a79-0a5fddb34b68 and options: { uuid: UUID("0c15630f-e174-4754-9a79-0a5fddb34b68"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.688-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 (983f282f-409f-4dfa-8f15-1c4a81b6c121) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.739-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.691-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.688-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 (983f282f-409f-4dfa-8f15-1c4a81b6c121).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.741-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.695-0500 I STORAGE [ReplWriterWorker-3] createCollection: test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 with provided UUID: 6c4f7f30-1ada-46a9-a83d-67ac6f231f66 and options: { uuid: UUID("6c4f7f30-1ada-46a9-a83d-67ac6f231f66"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.688-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 (983f282f-409f-4dfa-8f15-1c4a81b6c121)'. Ident: 'index-1350-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 3027)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.741-0500 I COMMAND [ReplWriterWorker-2] CMD: drop test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.712-0500 I INDEX [ReplWriterWorker-3] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.688-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 (983f282f-409f-4dfa-8f15-1c4a81b6c121)'. Ident: 'index-1351-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 3027)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.742-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 (599be818-51b8-4e66-b20a-d37789c9cd41) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 2522), t: 1 } and commit timestamp Timestamp(1574796804, 2522)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.719-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796804, 1579) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796804, 1707), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 7628 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 109ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.688-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085'. Ident: collection-1348-8224331490264904478, commit timestamp: Timestamp(1574796804, 3027)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.742-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 (599be818-51b8-4e66-b20a-d37789c9cd41).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.720-0500 I COMMAND [ReplWriterWorker-11] CMD: drop test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.688-0500 I COMMAND [conn67] command test5_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2771558137694165990, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 227290448063664998, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796804518), clusterTime: Timestamp(1574796804, 1) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796804, 1), signature: { hash: BinData(0, BC1195FB290DCC40EB38E63898FF400B461D6CD4), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"strict\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"strict\", validationAction: \"error\" }, new options: { validationLevel: \"off\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 168ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.742-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 (599be818-51b8-4e66-b20a-d37789c9cd41)'. Ident: 'index-1354--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 2522)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.720-0500 I STORAGE [ReplWriterWorker-11] dropCollection: test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f (c878ff31-e3ff-4be3-80b1-5fddf15f473c) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 2145), t: 1 } and commit timestamp Timestamp(1574796804, 2145)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.690-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 61659d43-3267-4509-825d-3328f967fac4: test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 ( 6c4f7f30-1ada-46a9-a83d-67ac6f231f66 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.742-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 (599be818-51b8-4e66-b20a-d37789c9cd41)'. Ident: 'index-1361--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 2522)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.720-0500 I STORAGE [ReplWriterWorker-11] Finishing collection drop for test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f (c878ff31-e3ff-4be3-80b1-5fddf15f473c).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.690-0500 I INDEX [conn108] Index build completed: 61659d43-3267-4509-825d-3328f967fac4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.742-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97'. Ident: collection-1353--8000595249233899911, commit timestamp: Timestamp(1574796804, 2522)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.720-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f (c878ff31-e3ff-4be3-80b1-5fddf15f473c)'. Ident: 'index-1348--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 2145)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.697-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.742-0500 I STORAGE [ReplWriterWorker-10] createCollection: test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d with provided UUID: ea5b93ee-8c25-4f30-ac71-610e7a69c1fc and options: { uuid: UUID("ea5b93ee-8c25-4f30-ac71-610e7a69c1fc"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.720-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f (c878ff31-e3ff-4be3-80b1-5fddf15f473c)'. Ident: 'index-1359--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 2145)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.698-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.744-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.720-0500 I STORAGE [ReplWriterWorker-11] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.3f747d99-ccbd-4f12-a662-fa704e2d439f'. Ident: collection-1347--4104909142373009110, commit timestamp: Timestamp(1574796804, 2145)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.700-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.754-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 2ae54130-0502-41fe-9d09-649fed1445e1: test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 ( 0c15630f-e174-4754-9a79-0a5fddb34b68 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.724-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d with provided UUID: aaa32297-72bb-416d-8098-7e1f16597149 and options: { uuid: UUID("aaa32297-72bb-416d-8098-7e1f16597149"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.700-0500 I COMMAND [conn112] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.761-0500 I INDEX [ReplWriterWorker-10] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.741-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.700-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.agg_out (bfd786d5-4348-427e-8337-7cbb30a706e4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 3030), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.780-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.758-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.700-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.agg_out (bfd786d5-4348-427e-8337-7cbb30a706e4).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.780-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.758-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.700-0500 I STORAGE [conn112] renameCollection: renaming collection 0c15630f-e174-4754-9a79-0a5fddb34b68 from test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.780-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: c665781f-6d6e-4cca-995e-11ff55c7cb06: test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 (6c4f7f30-1ada-46a9-a83d-67ac6f231f66 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.758-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: a397342e-0e7f-4d10-b0bc-84abacc9fd71: test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 (0c15630f-e174-4754-9a79-0a5fddb34b68 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.700-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bfd786d5-4348-427e-8337-7cbb30a706e4)'. Ident: 'index-1336-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 3030)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.781-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.758-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.700-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bfd786d5-4348-427e-8337-7cbb30a706e4)'. Ident: 'index-1340-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 3030)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.781-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.759-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.700-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1334-8224331490264904478, commit timestamp: Timestamp(1574796804, 3030)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.784-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.760-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.701-0500 I INDEX [conn114] Registering index build: 7eb4ff51-adb6-4b13-b8e6-ebd49c22c3e7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.786-0500 I COMMAND [ReplWriterWorker-6] CMD: drop test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.760-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.701-0500 I COMMAND [conn65] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 2833359159844066405, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4996339578342474704, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796804571), clusterTime: Timestamp(1574796804, 1202) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796804, 1333), signature: { hash: BinData(0, BC1195FB290DCC40EB38E63898FF400B461D6CD4), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 127ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.786-0500 I STORAGE [ReplWriterWorker-6] dropCollection: test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 (983f282f-409f-4dfa-8f15-1c4a81b6c121) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 3027), t: 1 } and commit timestamp Timestamp(1574796804, 3027)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.760-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 (599be818-51b8-4e66-b20a-d37789c9cd41) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 2522), t: 1 } and commit timestamp Timestamp(1574796804, 2522)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.701-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: f7a625c0-10b6-4e41-8ebd-1e19a35fbd32: test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d ( aaa32297-72bb-416d-8098-7e1f16597149 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.786-0500 I STORAGE [ReplWriterWorker-6] Finishing collection drop for test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 (983f282f-409f-4dfa-8f15-1c4a81b6c121).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.760-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 (599be818-51b8-4e66-b20a-d37789c9cd41).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.702-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 with generated UUID: 91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.786-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 (983f282f-409f-4dfa-8f15-1c4a81b6c121)'. Ident: 'index-1358--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 3027)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.761-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 (599be818-51b8-4e66-b20a-d37789c9cd41)'. Ident: 'index-1354--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 2522)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.702-0500 I COMMAND [conn67] CMD: dropIndexes test5_fsmdb0.agg_out: { padding: "text" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.786-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 (983f282f-409f-4dfa-8f15-1c4a81b6c121)'. Ident: 'index-1363--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 3027)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.761-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97 (599be818-51b8-4e66-b20a-d37789c9cd41)'. Ident: 'index-1361--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 2522)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.723-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.786-0500 I STORAGE [ReplWriterWorker-6] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085'. Ident: collection-1357--8000595249233899911, commit timestamp: Timestamp(1574796804, 3027)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.761-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.3d960ae0-fd9e-4ef4-bdd8-60c912519b97'. Ident: collection-1353--4104909142373009110, commit timestamp: Timestamp(1574796804, 2522)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.723-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.788-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: c665781f-6d6e-4cca-995e-11ff55c7cb06: test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 ( 6c4f7f30-1ada-46a9-a83d-67ac6f231f66 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.762-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: a397342e-0e7f-4d10-b0bc-84abacc9fd71: test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 ( 0c15630f-e174-4754-9a79-0a5fddb34b68 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.723-0500 I STORAGE [conn114] Index build initialized: 7eb4ff51-adb6-4b13-b8e6-ebd49c22c3e7: test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d (ea5b93ee-8c25-4f30-ac71-610e7a69c1fc ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.802-0500 I INDEX [ReplWriterWorker-9] index build: starting on test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.763-0500 I STORAGE [ReplWriterWorker-4] createCollection: test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d with provided UUID: ea5b93ee-8c25-4f30-ac71-610e7a69c1fc and options: { uuid: UUID("ea5b93ee-8c25-4f30-ac71-610e7a69c1fc"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.723-0500 I INDEX [conn110] Index build completed: f7a625c0-10b6-4e41-8ebd-1e19a35fbd32
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.802-0500 I INDEX [ReplWriterWorker-9] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.779-0500 I INDEX [ReplWriterWorker-4] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.723-0500 I INDEX [conn114] Waiting for index build to complete: 7eb4ff51-adb6-4b13-b8e6-ebd49c22c3e7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.802-0500 I STORAGE [ReplWriterWorker-9] Index build initialized: e0fbc609-c5bd-4989-887f-0156687a06fa: test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d (aaa32297-72bb-416d-8098-7e1f16597149 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.798-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.723-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.803-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.798-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.725-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 with generated UUID: deee2566-f8b8-4afd-9ac5-368c69532f75 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.803-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.798-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: 9c2cfc95-9e12-49d8-a67f-2bf63bf03706: test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 (6c4f7f30-1ada-46a9-a83d-67ac6f231f66 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.729-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.804-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 (0c15630f-e174-4754-9a79-0a5fddb34b68) to test5_fsmdb0.agg_out and drop bfd786d5-4348-427e-8337-7cbb30a706e4.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.798-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.730-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.806-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.799-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.739-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.806-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (bfd786d5-4348-427e-8337-7cbb30a706e4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 3030), t: 1 } and commit timestamp Timestamp(1574796804, 3030)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.802-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.807-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 9c2cfc95-9e12-49d8-a67f-2bf63bf03706: test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 ( 6c4f7f30-1ada-46a9-a83d-67ac6f231f66 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.806-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (bfd786d5-4348-427e-8337-7cbb30a706e4).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.745-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.807-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.806-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 0c15630f-e174-4754-9a79-0a5fddb34b68 from test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.746-0500 I COMMAND [conn108] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.807-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 (983f282f-409f-4dfa-8f15-1c4a81b6c121) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 3027), t: 1 } and commit timestamp Timestamp(1574796804, 3027)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.806-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bfd786d5-4348-427e-8337-7cbb30a706e4)'. Ident: 'index-1346--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 3030)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.746-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.agg_out (0c15630f-e174-4754-9a79-0a5fddb34b68) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 3536), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.807-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 (983f282f-409f-4dfa-8f15-1c4a81b6c121).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.806-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bfd786d5-4348-427e-8337-7cbb30a706e4)'. Ident: 'index-1355--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 3030)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.746-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.agg_out (0c15630f-e174-4754-9a79-0a5fddb34b68).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.807-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 (983f282f-409f-4dfa-8f15-1c4a81b6c121)'. Ident: 'index-1358--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 3027)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.806-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1345--8000595249233899911, commit timestamp: Timestamp(1574796804, 3030)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.746-0500 I STORAGE [conn108] renameCollection: renaming collection 6c4f7f30-1ada-46a9-a83d-67ac6f231f66 from test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.807-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085 (983f282f-409f-4dfa-8f15-1c4a81b6c121)'. Ident: 'index-1363--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 3027)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.807-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 with provided UUID: 91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09 and options: { uuid: UUID("91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:27.611-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796804, 3600), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:818 protocol:op_msg 2863ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.746-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0c15630f-e174-4754-9a79-0a5fddb34b68)'. Ident: 'index-1355-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 3536)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.807-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.aa4d5665-499e-427c-a748-5bc6ecaea085'. Ident: collection-1357--4104909142373009110, commit timestamp: Timestamp(1574796804, 3027)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.809-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: e0fbc609-c5bd-4989-887f-0156687a06fa: test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d ( aaa32297-72bb-416d-8098-7e1f16597149 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:27.611-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796804, 4043), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 cursorExhausted:1 numYields:0 nreturned:0 reslen:235 protocol:op_msg 2813ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.746-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0c15630f-e174-4754-9a79-0a5fddb34b68)'. Ident: 'index-1357-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 3536)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.823-0500 I INDEX [ReplWriterWorker-2] index build: starting on test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.824-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.746-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1353-8224331490264904478, commit timestamp: Timestamp(1574796804, 3536)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.823-0500 I INDEX [ReplWriterWorker-2] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.841-0500 I STORAGE [ReplWriterWorker-0] createCollection: test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 with provided UUID: deee2566-f8b8-4afd-9ac5-368c69532f75 and options: { uuid: UUID("deee2566-f8b8-4afd-9ac5-368c69532f75"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.746-0500 I INDEX [conn112] Registering index build: 0a88e929-3448-4199-a8ae-4141910a08a6
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.823-0500 I STORAGE [ReplWriterWorker-2] Index build initialized: bbe53bc6-5a94-4b79-b501-a25d802cb5f2: test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d (aaa32297-72bb-416d-8098-7e1f16597149 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.856-0500 I INDEX [ReplWriterWorker-0] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.746-0500 I INDEX [conn110] Registering index build: 439dd579-365b-4527-97bc-26171a052221
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.824-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.872-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.746-0500 I COMMAND [conn68] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2217804373618918201, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8184668230612891656, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796804574), clusterTime: Timestamp(1574796804, 1333) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796804, 1398), signature: { hash: BinData(0, BC1195FB290DCC40EB38E63898FF400B461D6CD4), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 171ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.825-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.826-0500 I COMMAND [ReplWriterWorker-14] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 (0c15630f-e174-4754-9a79-0a5fddb34b68) to test5_fsmdb0.agg_out and drop bfd786d5-4348-427e-8337-7cbb30a706e4.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.746-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: 7eb4ff51-adb6-4b13-b8e6-ebd49c22c3e7: test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d ( ea5b93ee-8c25-4f30-ac71-610e7a69c1fc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.827-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.749-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d with generated UUID: c922ade8-4d1e-4ceb-b4b3-219b30cfa056 and options: { temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.828-0500 I STORAGE [ReplWriterWorker-14] dropCollection: test5_fsmdb0.agg_out (bfd786d5-4348-427e-8337-7cbb30a706e4) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 3030), t: 1 } and commit timestamp Timestamp(1574796804, 3030)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.770-0500 I INDEX [conn112] index build: starting on test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.872-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.828-0500 I STORAGE [ReplWriterWorker-14] Finishing collection drop for test5_fsmdb0.agg_out (bfd786d5-4348-427e-8337-7cbb30a706e4).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.770-0500 I INDEX [conn112] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.872-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 87d60c76-abac-44d9-930e-76a3705d1ccf: test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d (ea5b93ee-8c25-4f30-ac71-610e7a69c1fc ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.828-0500 I STORAGE [ReplWriterWorker-14] renameCollection: renaming collection 0c15630f-e174-4754-9a79-0a5fddb34b68 from test5_fsmdb0.tmp.agg_out.52f4cf31-de1a-4a53-92ee-7cdf0a9b2947 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.770-0500 I STORAGE [conn112] Index build initialized: 0a88e929-3448-4199-a8ae-4141910a08a6: test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 (91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.873-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.828-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (bfd786d5-4348-427e-8337-7cbb30a706e4)'. Ident: 'index-1346--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 3030)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.770-0500 I INDEX [conn112] Waiting for index build to complete: 0a88e929-3448-4199-a8ae-4141910a08a6
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.873-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.828-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (bfd786d5-4348-427e-8337-7cbb30a706e4)'. Ident: 'index-1355--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 3030)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.770-0500 I INDEX [conn114] Index build completed: 7eb4ff51-adb6-4b13-b8e6-ebd49c22c3e7
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.874-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 (6c4f7f30-1ada-46a9-a83d-67ac6f231f66) to test5_fsmdb0.agg_out and drop 0c15630f-e174-4754-9a79-0a5fddb34b68.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.828-0500 I STORAGE [ReplWriterWorker-14] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1345--4104909142373009110, commit timestamp: Timestamp(1574796804, 3030)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.778-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.876-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.829-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: bbe53bc6-5a94-4b79-b501-a25d802cb5f2: test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d ( aaa32297-72bb-416d-8098-7e1f16597149 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.778-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.877-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (0c15630f-e174-4754-9a79-0a5fddb34b68) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 3536), t: 1 } and commit timestamp Timestamp(1574796804, 3536)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.829-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 with provided UUID: 91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09 and options: { uuid: UUID("91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.778-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (6c4f7f30-1ada-46a9-a83d-67ac6f231f66) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 4039), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.877-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (0c15630f-e174-4754-9a79-0a5fddb34b68).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.842-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.778-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (6c4f7f30-1ada-46a9-a83d-67ac6f231f66).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.877-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 6c4f7f30-1ada-46a9-a83d-67ac6f231f66 from test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.853-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796804, 3416) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796804, 3416), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 16598 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 126ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.778-0500 I STORAGE [conn46] renameCollection: renaming collection aaa32297-72bb-416d-8098-7e1f16597149 from test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.877-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0c15630f-e174-4754-9a79-0a5fddb34b68)'. Ident: 'index-1366--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 3536)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.857-0500 I STORAGE [ReplWriterWorker-15] createCollection: test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 with provided UUID: deee2566-f8b8-4afd-9ac5-368c69532f75 and options: { uuid: UUID("deee2566-f8b8-4afd-9ac5-368c69532f75"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.778-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6c4f7f30-1ada-46a9-a83d-67ac6f231f66)'. Ident: 'index-1356-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 4039)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.877-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0c15630f-e174-4754-9a79-0a5fddb34b68)'. Ident: 'index-1371--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 3536)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.871-0500 I INDEX [ReplWriterWorker-15] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.778-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6c4f7f30-1ada-46a9-a83d-67ac6f231f66)'. Ident: 'index-1359-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 4039)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.877-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1365--8000595249233899911, commit timestamp: Timestamp(1574796804, 3536)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.890-0500 I INDEX [ReplWriterWorker-12] index build: starting on test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.778-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1354-8224331490264904478, commit timestamp: Timestamp(1574796804, 4039)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.879-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 87d60c76-abac-44d9-930e-76a3705d1ccf: test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d ( ea5b93ee-8c25-4f30-ac71-610e7a69c1fc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.890-0500 I INDEX [ReplWriterWorker-12] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.778-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.881-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d with provided UUID: c922ade8-4d1e-4ceb-b4b3-219b30cfa056 and options: { uuid: UUID("c922ade8-4d1e-4ceb-b4b3-219b30cfa056"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.890-0500 I STORAGE [ReplWriterWorker-12] Index build initialized: dfd1a5ca-4a0a-4619-8a9b-913974938b2f: test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d (ea5b93ee-8c25-4f30-ac71-610e7a69c1fc ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.778-0500 I INDEX [conn108] Registering index build: 4e02bc43-4e1a-4663-9af6-8e2700494254
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.896-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.890-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.778-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8243943184009562752, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 3947596171788854505, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796804627), clusterTime: Timestamp(1574796804, 2209) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796804, 2273), signature: { hash: BinData(0, BC1195FB290DCC40EB38E63898FF400B461D6CD4), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 150ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.900-0500 I COMMAND [ReplWriterWorker-4] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d (aaa32297-72bb-416d-8098-7e1f16597149) to test5_fsmdb0.agg_out and drop 6c4f7f30-1ada-46a9-a83d-67ac6f231f66.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.891-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.779-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.900-0500 I STORAGE [ReplWriterWorker-4] dropCollection: test5_fsmdb0.agg_out (6c4f7f30-1ada-46a9-a83d-67ac6f231f66) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 4039), t: 1 } and commit timestamp Timestamp(1574796804, 4039)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.891-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 (6c4f7f30-1ada-46a9-a83d-67ac6f231f66) to test5_fsmdb0.agg_out and drop 0c15630f-e174-4754-9a79-0a5fddb34b68.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.788-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.900-0500 I STORAGE [ReplWriterWorker-4] Finishing collection drop for test5_fsmdb0.agg_out (6c4f7f30-1ada-46a9-a83d-67ac6f231f66).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.893-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.797-0500 I INDEX [conn110] index build: starting on test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.900-0500 I STORAGE [ReplWriterWorker-4] renameCollection: renaming collection aaa32297-72bb-416d-8098-7e1f16597149 from test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.894-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (0c15630f-e174-4754-9a79-0a5fddb34b68) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 3536), t: 1 } and commit timestamp Timestamp(1574796804, 3536)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.797-0500 I INDEX [conn110] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.900-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6c4f7f30-1ada-46a9-a83d-67ac6f231f66)'. Ident: 'index-1368--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 4039)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.894-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (0c15630f-e174-4754-9a79-0a5fddb34b68).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.797-0500 I STORAGE [conn110] Index build initialized: 439dd579-365b-4527-97bc-26171a052221: test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 (deee2566-f8b8-4afd-9ac5-368c69532f75 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.900-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6c4f7f30-1ada-46a9-a83d-67ac6f231f66)'. Ident: 'index-1375--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 4039)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.894-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 6c4f7f30-1ada-46a9-a83d-67ac6f231f66 from test5_fsmdb0.tmp.agg_out.2e871f79-893d-43bd-a008-7962b7c73205 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.797-0500 I INDEX [conn110] Waiting for index build to complete: 439dd579-365b-4527-97bc-26171a052221
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.900-0500 I STORAGE [ReplWriterWorker-4] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1367--8000595249233899911, commit timestamp: Timestamp(1574796804, 4039)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.894-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (0c15630f-e174-4754-9a79-0a5fddb34b68)'. Ident: 'index-1366--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 3536)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.797-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.915-0500 I INDEX [ReplWriterWorker-3] index build: starting on test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.894-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (0c15630f-e174-4754-9a79-0a5fddb34b68)'. Ident: 'index-1371--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 3536)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.798-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 0a88e929-3448-4199-a8ae-4141910a08a6: test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 ( 91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.915-0500 I INDEX [ReplWriterWorker-3] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.894-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1365--4104909142373009110, commit timestamp: Timestamp(1574796804, 3536)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.799-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.915-0500 I STORAGE [ReplWriterWorker-3] Index build initialized: 4c86276e-4771-4a54-b170-f44ff91c24ce: test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 (91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.895-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: dfd1a5ca-4a0a-4619-8a9b-913974938b2f: test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d ( ea5b93ee-8c25-4f30-ac71-610e7a69c1fc ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.799-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 with generated UUID: 7f8ceb41-20f1-4d33-a74b-6068fab8eabd and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.916-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.908-0500 I STORAGE [ReplWriterWorker-2] createCollection: test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d with provided UUID: c922ade8-4d1e-4ceb-b4b3-219b30cfa056 and options: { uuid: UUID("c922ade8-4d1e-4ceb-b4b3-219b30cfa056"), temp: true, validationLevel: "off", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.815-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.916-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.923-0500 I INDEX [ReplWriterWorker-2] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.816-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.919-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.927-0500 I COMMAND [ReplWriterWorker-7] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d (aaa32297-72bb-416d-8098-7e1f16597149) to test5_fsmdb0.agg_out and drop 6c4f7f30-1ada-46a9-a83d-67ac6f231f66.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.816-0500 I STORAGE [conn108] Index build initialized: 4e02bc43-4e1a-4663-9af6-8e2700494254: test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d (c922ade8-4d1e-4ceb-b4b3-219b30cfa056 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.921-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 4c86276e-4771-4a54-b170-f44ff91c24ce: test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 ( 91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.928-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.agg_out (6c4f7f30-1ada-46a9-a83d-67ac6f231f66) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 4039), t: 1 } and commit timestamp Timestamp(1574796804, 4039)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.816-0500 I INDEX [conn108] Waiting for index build to complete: 4e02bc43-4e1a-4663-9af6-8e2700494254
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.923-0500 I STORAGE [ReplWriterWorker-12] createCollection: test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 with provided UUID: 7f8ceb41-20f1-4d33-a74b-6068fab8eabd and options: { uuid: UUID("7f8ceb41-20f1-4d33-a74b-6068fab8eabd"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.928-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.agg_out (6c4f7f30-1ada-46a9-a83d-67ac6f231f66).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.816-0500 I INDEX [conn112] Index build completed: 0a88e929-3448-4199-a8ae-4141910a08a6
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.935-0500 I INDEX [ReplWriterWorker-12] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.928-0500 I STORAGE [ReplWriterWorker-7] renameCollection: renaming collection aaa32297-72bb-416d-8098-7e1f16597149 from test5_fsmdb0.tmp.agg_out.6fea3ea6-1ae5-4e33-8183-bcbca064167d to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.816-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.952-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.928-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (6c4f7f30-1ada-46a9-a83d-67ac6f231f66)'. Ident: 'index-1368--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 4039)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.825-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.952-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.928-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (6c4f7f30-1ada-46a9-a83d-67ac6f231f66)'. Ident: 'index-1375--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 4039)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.826-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.952-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: becb959f-d37e-490a-a187-af4392a06e5e: test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 (deee2566-f8b8-4afd-9ac5-368c69532f75 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.928-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1367--4104909142373009110, commit timestamp: Timestamp(1574796804, 4039)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.834-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.952-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.940-0500 I INDEX [ReplWriterWorker-13] index build: starting on test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.834-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: 439dd579-365b-4527-97bc-26171a052221: test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 ( deee2566-f8b8-4afd-9ac5-368c69532f75 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.952-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.940-0500 I INDEX [ReplWriterWorker-13] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.834-0500 I INDEX [conn110] Index build completed: 439dd579-365b-4527-97bc-26171a052221
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.955-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.940-0500 I STORAGE [ReplWriterWorker-13] Index build initialized: 42cfed0d-3e71-4a69-8a39-e64a140422eb: test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 (91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.836-0500 I INDEX [IndexBuildsCoordinatorMongod-3] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.962-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: becb959f-d37e-490a-a187-af4392a06e5e: test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 ( deee2566-f8b8-4afd-9ac5-368c69532f75 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.941-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.837-0500 I INDEX [conn46] Registering index build: fca4c2ac-0d21-4de3-b691-12a08c270dc2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.968-0500 I INDEX [ReplWriterWorker-8] index build: starting on test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.941-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.837-0500 I COMMAND [conn114] CMD: drop test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.968-0500 I INDEX [ReplWriterWorker-8] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.945-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.838-0500 I STORAGE [IndexBuildsCoordinatorMongod-3] Index build completed successfully: 4e02bc43-4e1a-4663-9af6-8e2700494254: test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d ( c922ade8-4d1e-4ceb-b4b3-219b30cfa056 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.968-0500 I STORAGE [ReplWriterWorker-8] Index build initialized: f6d4fa85-d65b-46aa-aecd-7cacd424e9b6: test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d (c922ade8-4d1e-4ceb-b4b3-219b30cfa056 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.946-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 42cfed0d-3e71-4a69-8a39-e64a140422eb: test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 ( 91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.857-0500 I INDEX [conn46] index build: starting on test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.968-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.948-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 with provided UUID: 7f8ceb41-20f1-4d33-a74b-6068fab8eabd and options: { uuid: UUID("7f8ceb41-20f1-4d33-a74b-6068fab8eabd"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.857-0500 I INDEX [conn46] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.968-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.963-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.857-0500 I STORAGE [conn46] Index build initialized: fca4c2ac-0d21-4de3-b691-12a08c270dc2: test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 (7f8ceb41-20f1-4d33-a74b-6068fab8eabd ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.971-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.983-0500 I INDEX [ReplWriterWorker-6] index build: starting on test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.857-0500 I INDEX [conn46] Waiting for index build to complete: fca4c2ac-0d21-4de3-b691-12a08c270dc2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.974-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: f6d4fa85-d65b-46aa-aecd-7cacd424e9b6: test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d ( c922ade8-4d1e-4ceb-b4b3-219b30cfa056 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.983-0500 I INDEX [ReplWriterWorker-6] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.857-0500 I INDEX [conn108] Index build completed: 4e02bc43-4e1a-4663-9af6-8e2700494254
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.978-0500 I COMMAND [ReplWriterWorker-1] CMD: drop test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.983-0500 I STORAGE [ReplWriterWorker-6] Index build initialized: acd34821-a40b-4000-bc4b-8e04dcc71687: test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 (deee2566-f8b8-4afd-9ac5-368c69532f75 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.857-0500 I STORAGE [conn114] dropCollection: test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d (ea5b93ee-8c25-4f30-ac71-610e7a69c1fc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.978-0500 I STORAGE [ReplWriterWorker-1] dropCollection: test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d (ea5b93ee-8c25-4f30-ac71-610e7a69c1fc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 5551), t: 1 } and commit timestamp Timestamp(1574796804, 5551)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.983-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.857-0500 I STORAGE [conn114] Finishing collection drop for test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d (ea5b93ee-8c25-4f30-ac71-610e7a69c1fc).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.978-0500 I STORAGE [ReplWriterWorker-1] Finishing collection drop for test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d (ea5b93ee-8c25-4f30-ac71-610e7a69c1fc).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.984-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.857-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.978-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d (ea5b93ee-8c25-4f30-ac71-610e7a69c1fc)'. Ident: 'index-1374--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 5551)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.986-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.857-0500 I STORAGE [conn114] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d (ea5b93ee-8c25-4f30-ac71-610e7a69c1fc)'. Ident: 'index-1366-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 5551)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.978-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d (ea5b93ee-8c25-4f30-ac71-610e7a69c1fc)'. Ident: 'index-1383--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 5551)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:24.996-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: acd34821-a40b-4000-bc4b-8e04dcc71687: test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 ( deee2566-f8b8-4afd-9ac5-368c69532f75 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.857-0500 I STORAGE [conn114] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d (ea5b93ee-8c25-4f30-ac71-610e7a69c1fc)'. Ident: 'index-1367-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 5551)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.978-0500 I STORAGE [ReplWriterWorker-1] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d'. Ident: collection-1373--8000595249233899911, commit timestamp: Timestamp(1574796804, 5551)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.002-0500 I INDEX [ReplWriterWorker-15] index build: starting on test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.857-0500 I STORAGE [conn114] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d'. Ident: collection-1364-8224331490264904478, commit timestamp: Timestamp(1574796804, 5551)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.992-0500 I INDEX [ReplWriterWorker-11] index build: starting on test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.002-0500 I INDEX [ReplWriterWorker-15] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.858-0500 I COMMAND [conn71] command test5_fsmdb0.agg_out appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 1796279948035843322, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 852189570888591976, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796804652), clusterTime: Timestamp(1574796804, 2522) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796804, 2522), signature: { hash: BinData(0, BC1195FB290DCC40EB38E63898FF400B461D6CD4), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:1" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59220", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:988 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 197ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.992-0500 I INDEX [ReplWriterWorker-11] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.003-0500 I STORAGE [ReplWriterWorker-15] Index build initialized: d5c0c1b5-9183-48b1-80bc-85598301b7e5: test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d (c922ade8-4d1e-4ceb-b4b3-219b30cfa056 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.858-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.992-0500 I STORAGE [ReplWriterWorker-11] Index build initialized: 3108e9cd-1f3a-436f-b49b-3daf64d6c62b: test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 (7f8ceb41-20f1-4d33-a74b-6068fab8eabd ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.003-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.861-0500 I INDEX [IndexBuildsCoordinatorMongod-5] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.993-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.003-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.861-0500 I COMMAND [conn110] CMD: drop test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.993-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.005-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.861-0500 I STORAGE [conn110] dropCollection: test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 (deee2566-f8b8-4afd-9ac5-368c69532f75) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.994-0500 I COMMAND [ReplWriterWorker-12] CMD: drop test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.010-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: d5c0c1b5-9183-48b1-80bc-85598301b7e5: test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d ( c922ade8-4d1e-4ceb-b4b3-219b30cfa056 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.861-0500 I STORAGE [conn110] Finishing collection drop for test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 (deee2566-f8b8-4afd-9ac5-368c69532f75).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.994-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 (deee2566-f8b8-4afd-9ac5-368c69532f75) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 5554), t: 1 } and commit timestamp Timestamp(1574796804, 5554)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.015-0500 I COMMAND [conn106] command admin.run_check_repl_dbhash_background appName: "MongoDB Shell" command: find { find: "run_check_repl_dbhash_background", readConcern: { level: "majority", afterClusterTime: Timestamp(1574796804, 5498) }, limit: 1.0, singleBatch: true, lsid: { id: UUID("db13105a-0202-4d4c-9109-23747867bb60") }, $clusterTime: { clusterTime: Timestamp(1574796804, 5551), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 12610 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_msg 155ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.861-0500 I STORAGE [conn110] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 (deee2566-f8b8-4afd-9ac5-368c69532f75)'. Ident: 'index-1372-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 5554)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.994-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 (deee2566-f8b8-4afd-9ac5-368c69532f75).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.016-0500 I COMMAND [ReplWriterWorker-9] CMD: drop test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.861-0500 I STORAGE [conn110] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 (deee2566-f8b8-4afd-9ac5-368c69532f75)'. Ident: 'index-1377-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 5554)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.994-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 (deee2566-f8b8-4afd-9ac5-368c69532f75)'. Ident: 'index-1382--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 5554)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.016-0500 I STORAGE [ReplWriterWorker-9] dropCollection: test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d (ea5b93ee-8c25-4f30-ac71-610e7a69c1fc) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 5551), t: 1 } and commit timestamp Timestamp(1574796804, 5551)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.862-0500 I COMMAND [conn108] CMD: drop test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.994-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 (deee2566-f8b8-4afd-9ac5-368c69532f75)'. Ident: 'index-1391--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 5554)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.016-0500 I STORAGE [ReplWriterWorker-9] Finishing collection drop for test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d (ea5b93ee-8c25-4f30-ac71-610e7a69c1fc).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.862-0500 I STORAGE [conn110] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5'. Ident: collection-1371-8224331490264904478, commit timestamp: Timestamp(1574796804, 5554)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.994-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5'. Ident: collection-1381--8000595249233899911, commit timestamp: Timestamp(1574796804, 5554)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.016-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d (ea5b93ee-8c25-4f30-ac71-610e7a69c1fc)'. Ident: 'index-1374--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 5551)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.862-0500 I STORAGE [conn108] dropCollection: test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 (91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.995-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.016-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d (ea5b93ee-8c25-4f30-ac71-610e7a69c1fc)'. Ident: 'index-1383--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 5551)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.862-0500 I STORAGE [conn108] Finishing collection drop for test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 (91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.995-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 (91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 5555), t: 1 } and commit timestamp Timestamp(1574796804, 5555)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.016-0500 I STORAGE [ReplWriterWorker-9] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.7bf7c403-489f-4556-a4a4-f220bec4375d'. Ident: collection-1373--4104909142373009110, commit timestamp: Timestamp(1574796804, 5551)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.862-0500 I STORAGE [conn108] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 (91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09)'. Ident: 'index-1370-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 5555)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.995-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 (91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.031-0500 I INDEX [ReplWriterWorker-10] index build: starting on test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.862-0500 I STORAGE [conn108] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 (91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09)'. Ident: 'index-1373-8224331490264904478', commit timestamp: 'Timestamp(1574796804, 5555)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.995-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 (91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09)'. Ident: 'index-1380--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 5555)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.031-0500 I INDEX [ReplWriterWorker-10] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.862-0500 I STORAGE [conn108] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5'. Ident: collection-1368-8224331490264904478, commit timestamp: Timestamp(1574796804, 5555)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.995-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 (91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09)'. Ident: 'index-1387--8000595249233899911', commit timestamp: 'Timestamp(1574796804, 5555)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.031-0500 I STORAGE [ReplWriterWorker-10] Index build initialized: 57c4d8b7-b49f-4f20-9935-ef691e477e69: test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 (7f8ceb41-20f1-4d33-a74b-6068fab8eabd ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.862-0500 I STORAGE [conn114] createCollection: test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037 with generated UUID: 99edb5a0-9acc-4cb7-bca9-7453ec0fc251 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.995-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5'. Ident: collection-1379--8000595249233899911, commit timestamp: Timestamp(1574796804, 5555)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.031-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.862-0500 I COMMAND [conn67] command test5_fsmdb0.agg_out appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 6440431445697632405, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 6731747090739340723, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796804724), clusterTime: Timestamp(1574796804, 3416) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796804, 3480), signature: { hash: BinData(0, BC1195FB290DCC40EB38E63898FF400B461D6CD4), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:0" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46066", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:988 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 137ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.996-0500 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.032-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.862-0500 I COMMAND [conn65] command test5_fsmdb0.agg_out appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 5474266441713394470, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 25274431199033535, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796804689), clusterTime: Timestamp(1574796804, 3027) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796804, 3030), signature: { hash: BinData(0, BC1195FB290DCC40EB38E63898FF400B461D6CD4), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:988 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 161ms
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:24.996-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037 with provided UUID: 99edb5a0-9acc-4cb7-bca9-7453ec0fc251 and options: { uuid: UUID("99edb5a0-9acc-4cb7-bca9-7453ec0fc251"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.033-0500 I COMMAND [ReplWriterWorker-15] CMD: drop test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.864-0500 I STORAGE [IndexBuildsCoordinatorMongod-5] Index build completed successfully: fca4c2ac-0d21-4de3-b691-12a08c270dc2: test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 ( 7f8ceb41-20f1-4d33-a74b-6068fab8eabd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:25.000-0500 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 3108e9cd-1f3a-436f-b49b-3daf64d6c62b: test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 ( 7f8ceb41-20f1-4d33-a74b-6068fab8eabd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.033-0500 I STORAGE [ReplWriterWorker-15] dropCollection: test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 (deee2566-f8b8-4afd-9ac5-368c69532f75) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 5554), t: 1 } and commit timestamp Timestamp(1574796804, 5554)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.864-0500 I INDEX [conn46] Index build completed: fca4c2ac-0d21-4de3-b691-12a08c270dc2
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:25.014-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.033-0500 I STORAGE [ReplWriterWorker-15] Finishing collection drop for test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 (deee2566-f8b8-4afd-9ac5-368c69532f75).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.865-0500 I STORAGE [conn108] createCollection: test5_fsmdb0.tmp.agg_out.1d6fe205-9a94-4550-9961-abdbb7f6fe77 with generated UUID: 500efb0b-4159-4060-9da6-2fa4ef29a646 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:25.018-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.1d6fe205-9a94-4550-9961-abdbb7f6fe77 with provided UUID: 500efb0b-4159-4060-9da6-2fa4ef29a646 and options: { uuid: UUID("500efb0b-4159-4060-9da6-2fa4ef29a646"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.033-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 (deee2566-f8b8-4afd-9ac5-368c69532f75)'. Ident: 'index-1382--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 5554)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.865-0500 I STORAGE [conn110] createCollection: test5_fsmdb0.tmp.agg_out.35ec9e8f-a82c-4e2f-ba10-f055e8c7db1c with generated UUID: a4ca71f9-e300-4148-891c-8ee681ffe3d3 and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:25.032-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1d6fe205-9a94-4550-9961-abdbb7f6fe77
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.033-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5 (deee2566-f8b8-4afd-9ac5-368c69532f75)'. Ident: 'index-1391--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 5554)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.898-0500 I INDEX [conn114] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.033-0500 I STORAGE [ReplWriterWorker-15] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.01bd969c-0068-4124-818d-f369db0eceb5'. Ident: collection-1381--4104909142373009110, commit timestamp: Timestamp(1574796804, 5554)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.599-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.35ec9e8f-a82c-4e2f-ba10-f055e8c7db1c with provided UUID: a4ca71f9-e300-4148-891c-8ee681ffe3d3 and options: { uuid: UUID("a4ca71f9-e300-4148-891c-8ee681ffe3d3"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.903-0500 I INDEX [conn108] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1d6fe205-9a94-4550-9961-abdbb7f6fe77
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.033-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.613-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.35ec9e8f-a82c-4e2f-ba10-f055e8c7db1c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:24.909-0500 I INDEX [conn110] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.35ec9e8f-a82c-4e2f-ba10-f055e8c7db1c
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.034-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 (91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796804, 5555), t: 1 } and commit timestamp Timestamp(1574796804, 5555)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.034-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 (91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.034-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 (91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09)'. Ident: 'index-1380--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 5555)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.034-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5 (91cbc2af-e9a8-49b0-b8d5-4a4f6be74c09)'. Ident: 'index-1387--4104909142373009110', commit timestamp: 'Timestamp(1574796804, 5555)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.034-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.a01dd953-f08b-4abc-9da9-6afa8ae5a3f5'. Ident: collection-1379--4104909142373009110, commit timestamp: Timestamp(1574796804, 5555)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.034-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.034-0500 I STORAGE [ReplWriterWorker-11] createCollection: test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037 with provided UUID: 99edb5a0-9acc-4cb7-bca9-7453ec0fc251 and options: { uuid: UUID("99edb5a0-9acc-4cb7-bca9-7453ec0fc251"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.036-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: 57c4d8b7-b49f-4f20-9935-ef691e477e69: test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 ( 7f8ceb41-20f1-4d33-a74b-6068fab8eabd ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.047-0500 I INDEX [ReplWriterWorker-11] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.051-0500 I STORAGE [ReplWriterWorker-9] createCollection: test5_fsmdb0.tmp.agg_out.1d6fe205-9a94-4550-9961-abdbb7f6fe77 with provided UUID: 500efb0b-4159-4060-9da6-2fa4ef29a646 and options: { uuid: UUID("500efb0b-4159-4060-9da6-2fa4ef29a646"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.594-0500 I COMMAND [conn110] command test5_fsmdb0.tmp.agg_out.35ec9e8f-a82c-4e2f-ba10-f055e8c7db1c appName: "tid:4" command: create { create: "tmp.agg_out.35ec9e8f-a82c-4e2f-ba10-f055e8c7db1c", temp: true, validationLevel: "moderate", validationAction: "error", databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796804, 5620), signature: { hash: BinData(0, BC1195FB290DCC40EB38E63898FF400B461D6CD4), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:4" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46076", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 reslen:333 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2729ms
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:25.062-0500 I INDEX [ReplWriterWorker-9] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.1d6fe205-9a94-4550-9961-abdbb7f6fe77
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.595-0500 I INDEX [conn114] Registering index build: f5da0eae-9c5d-471a-9b0c-208eedd8858c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.595-0500 I COMMAND [conn112] command admin.$cmd appName: "tid:2" command: internalRenameIfOptionsAndIndexesMatch { internalRenameIfOptionsAndIndexesMatch: 1, from: "test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d", to: "test5_fsmdb0.agg_out", collectionOptions: { validationLevel: "off", validationAction: "error" }, indexes: [ { v: 2, key: { _id: 1 }, name: "_id_" }, { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796804, 6122), signature: { hash: BinData(0, BC1195FB290DCC40EB38E63898FF400B461D6CD4), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "admin" } numYields:0 ok:0 errMsg:"collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:616 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 2712197 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2712ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.595-0500 I INDEX [conn108] Registering index build: a2fb801e-b2f2-4bcf-818b-863c800c0444
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.595-0500 I COMMAND [conn220] command test5_fsmdb0.$cmd appName: "MongoDB Shell" command: dbHash { dbHash: 1.0, $_internalReadAtClusterTime: Timestamp(1574796804, 5498), lsid: { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") }, $clusterTime: { clusterTime: Timestamp(1574796804, 5551), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796804, 5498). Collection minimum timestamp is Timestamp(1574796804, 5621)" errName:SnapshotUnavailable errCode:246 reslen:602 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 2577748 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 2577ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.595-0500 I INDEX [conn110] Registering index build: 61db388f-32de-48d9-bda6-6f441251d9f3
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.595-0500 I COMMAND [conn112] CMD: drop test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.604-0500 I COMMAND [conn46] command test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 appName: "tid:3" command: insert { insert: "tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4", bypassDocumentValidation: false, ordered: false, documents: 500, shardVersion: [ Timestamp(0, 0), ObjectId('000000000000000000000000') ], databaseVersion: { uuid: UUID("99da02c1-15be-40e8-8663-3695a490d74c"), lastMod: 1 }, writeConcern: { w: 1, wtimeout: 0 }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796804, 5622), signature: { hash: BinData(0, BC1195FB290DCC40EB38E63898FF400B461D6CD4), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } ninserted:500 keysInserted:1000 numYields:0 reslen:400 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 8 } }, ReplicationStateTransition: { acquireCount: { w: 8 } }, Global: { acquireCount: { w: 8 } }, Database: { acquireCount: { w: 8 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 2704536 } }, Collection: { acquireCount: { w: 8 } }, Mutex: { acquireCount: { r: 1016 } } } flowControl:{ acquireCount: 8 } storage:{ timeWaitingMicros: { schemaLock: 19943 } } protocol:op_msg 2735ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.610-0500 I INDEX [conn114] index build: starting on test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.610-0500 I INDEX [conn114] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.610-0500 I STORAGE [conn114] Index build initialized: f5da0eae-9c5d-471a-9b0c-208eedd8858c: test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037 (99edb5a0-9acc-4cb7-bca9-7453ec0fc251 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.610-0500 I INDEX [conn114] Waiting for index build to complete: f5da0eae-9c5d-471a-9b0c-208eedd8858c
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.610-0500 I STORAGE [conn112] dropCollection: test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d (c922ade8-4d1e-4ceb-b4b3-219b30cfa056) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.610-0500 I STORAGE [conn112] Finishing collection drop for test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d (c922ade8-4d1e-4ceb-b4b3-219b30cfa056).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.610-0500 I STORAGE [conn112] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d (c922ade8-4d1e-4ceb-b4b3-219b30cfa056)'. Ident: 'index-1376-8224331490264904478', commit timestamp: 'Timestamp(1574796807, 442)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.610-0500 I STORAGE [conn112] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d (c922ade8-4d1e-4ceb-b4b3-219b30cfa056)'. Ident: 'index-1379-8224331490264904478', commit timestamp: 'Timestamp(1574796807, 442)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.610-0500 I STORAGE [conn112] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d'. Ident: collection-1374-8224331490264904478, commit timestamp: Timestamp(1574796807, 442)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.610-0500 I COMMAND [conn46] renameCollectionForCommand: rename test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 to test5_fsmdb0.agg_out and drop test5_fsmdb0.agg_out.
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.611-0500 I STORAGE [conn46] dropCollection: test5_fsmdb0.agg_out (aaa32297-72bb-416d-8098-7e1f16597149) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796807, 443), t: 1 } and commit timestamp Timestamp(0, 0)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.611-0500 I STORAGE [conn46] Finishing collection drop for test5_fsmdb0.agg_out (aaa32297-72bb-416d-8098-7e1f16597149).
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.611-0500 I STORAGE [conn46] renameCollection: renaming collection 7f8ceb41-20f1-4d33-a74b-6068fab8eabd from test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.611-0500 I COMMAND [conn68] command test5_fsmdb0.agg_out appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 4821399593509220634, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 9026262237504903373, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796804748), clusterTime: Timestamp(1574796804, 3600) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796804, 3600), signature: { hash: BinData(0, BC1195FB290DCC40EB38E63898FF400B461D6CD4), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:2" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20007", client: "127.0.0.1:46068", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } numYields:0 ok:0 errMsg:"failed while running command { internalRenameIfOptionsAndIndexesMatch: 1, from: \"test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d\", to: \"test5_fsmdb0.agg_out\", collectionOptions: { validationLevel: \"off\", validationAction: \"error\" }, indexes: [ { v: 2, key: { _id: 1 }, name: \"_id_\" }, { v: 2, key: { _id: \"hashed\" }, name: \"_id_hashed\" } ] } :: caused by :: collection options of target collection test5_fsmdb0.agg_out changed during processing. Original options: { validationLevel: \"off\", validationAction: \"error\" }, new options: { validationLevel: \"moderate\", validationAction: \"error\" }" errName:CommandFailed errCode:125 reslen:988 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2862ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.611-0500 I STORAGE [conn46] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (aaa32297-72bb-416d-8098-7e1f16597149)'. Ident: 'index-1362-8224331490264904478', commit timestamp: 'Timestamp(1574796807, 443)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.615-0500 I STORAGE [ReplWriterWorker-13] createCollection: test5_fsmdb0.tmp.agg_out.35ec9e8f-a82c-4e2f-ba10-f055e8c7db1c with provided UUID: a4ca71f9-e300-4148-891c-8ee681ffe3d3 and options: { uuid: UUID("a4ca71f9-e300-4148-891c-8ee681ffe3d3"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.611-0500 I STORAGE [conn46] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (aaa32297-72bb-416d-8098-7e1f16597149)'. Ident: 'index-1363-8224331490264904478', commit timestamp: 'Timestamp(1574796807, 443)'
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.611-0500 I STORAGE [conn46] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1360-8224331490264904478, commit timestamp: Timestamp(1574796807, 443)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.611-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.611-0500 I COMMAND [conn70] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $mergeCursors: { lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, compareWholeSortKey: false, remotes: [ { shardId: "shard-rs0", hostAndPort: "localhost:20001", cursorResponse: { cursor: { id: 2480843597742197779, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } }, { shardId: "shard-rs1", hostAndPort: "localhost:20004", cursorResponse: { cursor: { id: 8202383563970117584, ns: "test5_fsmdb0.fsmcoll0", firstBatch: [] }, ok: 1.0 } } ], tailableMode: "normal", nss: "test5_fsmdb0.fsmcoll0", allowPartialResults: false } }, { $out: "agg_out" } ], fromMongos: true, collation: { locale: "simple" }, cursor: { batchSize: 101 }, runtimeConstants: { localNow: new Date(1574796804798), clusterTime: Timestamp(1574796804, 4043) }, allowImplicitCollectionCreation: false, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1574796804, 4107), signature: { hash: BinData(0, BC1195FB290DCC40EB38E63898FF400B461D6CD4), keyId: 6763700092420489256 } }, $client: { application: { name: "tid:3" }, driver: { name: "MongoDB Internal Client", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" }, mongos: { host: "nz_desktop:20008", client: "127.0.0.1:59212", version: "0.0.0" } }, $configServerState: { opTime: { ts: Timestamp(1574796797, 1), t: 1 } }, $db: "test5_fsmdb0" } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 3 } } } storage:{} protocol:op_msg 2812ms
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.611-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.614-0500 I STORAGE [conn46] createCollection: test5_fsmdb0.tmp.agg_out.f61a7034-2a80-40c1-a416-62ce435a0e35 with generated UUID: b96cafac-e50c-4fba-b89f-189cd3f546dc and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.614-0500 I STORAGE [conn112] createCollection: test5_fsmdb0.tmp.agg_out.dc957488-307a-4bfd-9f34-ba528cc3e6c0 with generated UUID: 2dba5bd9-2911-4f33-8eb2-87a7a688314f and options: { temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.620-0500 I INDEX [IndexBuildsCoordinatorMongod-4] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.629-0500 I INDEX [ReplWriterWorker-13] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.35ec9e8f-a82c-4e2f-ba10-f055e8c7db1c
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.634-0500 I COMMAND [ReplWriterWorker-2] CMD: drop test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.635-0500 I STORAGE [ReplWriterWorker-2] dropCollection: test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d (c922ade8-4d1e-4ceb-b4b3-219b30cfa056) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796807, 442), t: 1 } and commit timestamp Timestamp(1574796807, 442)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.635-0500 I STORAGE [ReplWriterWorker-2] Finishing collection drop for test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d (c922ade8-4d1e-4ceb-b4b3-219b30cfa056).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.635-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d (c922ade8-4d1e-4ceb-b4b3-219b30cfa056)'. Ident: 'index-1386--8000595249233899911', commit timestamp: 'Timestamp(1574796807, 442)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.635-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d (c922ade8-4d1e-4ceb-b4b3-219b30cfa056)'. Ident: 'index-1393--8000595249233899911', commit timestamp: 'Timestamp(1574796807, 442)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.635-0500 I STORAGE [ReplWriterWorker-2] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d'. Ident: collection-1385--8000595249233899911, commit timestamp: Timestamp(1574796807, 442)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.635-0500 I COMMAND [ReplWriterWorker-3] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 (7f8ceb41-20f1-4d33-a74b-6068fab8eabd) to test5_fsmdb0.agg_out and drop aaa32297-72bb-416d-8098-7e1f16597149.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.636-0500 I STORAGE [ReplWriterWorker-3] dropCollection: test5_fsmdb0.agg_out (aaa32297-72bb-416d-8098-7e1f16597149) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796807, 443), t: 1 } and commit timestamp Timestamp(1574796807, 443)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.636-0500 I STORAGE [ReplWriterWorker-3] Finishing collection drop for test5_fsmdb0.agg_out (aaa32297-72bb-416d-8098-7e1f16597149).
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.636-0500 I STORAGE [ReplWriterWorker-3] renameCollection: renaming collection 7f8ceb41-20f1-4d33-a74b-6068fab8eabd from test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.636-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (aaa32297-72bb-416d-8098-7e1f16597149)'. Ident: 'index-1370--8000595249233899911', commit timestamp: 'Timestamp(1574796807, 443)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.636-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (aaa32297-72bb-416d-8098-7e1f16597149)'. Ident: 'index-1377--8000595249233899911', commit timestamp: 'Timestamp(1574796807, 443)'
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.636-0500 I STORAGE [ReplWriterWorker-3] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1369--8000595249233899911, commit timestamp: Timestamp(1574796807, 443)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.640-0500 I COMMAND [ReplWriterWorker-7] CMD: drop test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.640-0500 I INDEX [conn108] index build: starting on test5_fsmdb0.tmp.agg_out.1d6fe205-9a94-4550-9961-abdbb7f6fe77 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.640-0500 I INDEX [conn108] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.640-0500 I STORAGE [conn108] Index build initialized: a2fb801e-b2f2-4bcf-818b-863c800c0444: test5_fsmdb0.tmp.agg_out.1d6fe205-9a94-4550-9961-abdbb7f6fe77 (500efb0b-4159-4060-9da6-2fa4ef29a646 ): indexes: 1
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.640-0500 I INDEX [conn108] Waiting for index build to complete: a2fb801e-b2f2-4bcf-818b-863c800c0444
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.640-0500 I STORAGE [ReplWriterWorker-7] dropCollection: test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d (c922ade8-4d1e-4ceb-b4b3-219b30cfa056) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796807, 442), t: 1 } and commit timestamp Timestamp(1574796807, 442)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.640-0500 I STORAGE [ReplWriterWorker-7] Finishing collection drop for test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d (c922ade8-4d1e-4ceb-b4b3-219b30cfa056).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.641-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d (c922ade8-4d1e-4ceb-b4b3-219b30cfa056)'. Ident: 'index-1386--4104909142373009110', commit timestamp: 'Timestamp(1574796807, 442)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.641-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d (c922ade8-4d1e-4ceb-b4b3-219b30cfa056)'. Ident: 'index-1393--4104909142373009110', commit timestamp: 'Timestamp(1574796807, 442)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.641-0500 I STORAGE [ReplWriterWorker-7] Deferring table drop for collection 'test5_fsmdb0.tmp.agg_out.3208d6bf-be6e-4173-9010-bfdc1c4dd13d'. Ident: collection-1385--4104909142373009110, commit timestamp: Timestamp(1574796807, 442)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.641-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.641-0500 I STORAGE [IndexBuildsCoordinatorMongod-4] Index build completed successfully: f5da0eae-9c5d-471a-9b0c-208eedd8858c: test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037 ( 99edb5a0-9acc-4cb7-bca9-7453ec0fc251 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.641-0500 I COMMAND [ReplWriterWorker-12] renameCollectionForApplyOps: rename test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 (7f8ceb41-20f1-4d33-a74b-6068fab8eabd) to test5_fsmdb0.agg_out and drop aaa32297-72bb-416d-8098-7e1f16597149.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.641-0500 I STORAGE [ReplWriterWorker-12] dropCollection: test5_fsmdb0.agg_out (aaa32297-72bb-416d-8098-7e1f16597149) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(1574796807, 443), t: 1 } and commit timestamp Timestamp(1574796807, 443)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.641-0500 I STORAGE [ReplWriterWorker-12] Finishing collection drop for test5_fsmdb0.agg_out (aaa32297-72bb-416d-8098-7e1f16597149).
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.641-0500 I STORAGE [ReplWriterWorker-12] renameCollection: renaming collection 7f8ceb41-20f1-4d33-a74b-6068fab8eabd from test5_fsmdb0.tmp.agg_out.f5bafe5c-6539-4349-821a-a4b8414bbdb4 to test5_fsmdb0.agg_out
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.641-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_' on collection 'test5_fsmdb0.agg_out (aaa32297-72bb-416d-8098-7e1f16597149)'. Ident: 'index-1370--4104909142373009110', commit timestamp: 'Timestamp(1574796807, 443)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.641-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for index '_id_hashed' on collection 'test5_fsmdb0.agg_out (aaa32297-72bb-416d-8098-7e1f16597149)'. Ident: 'index-1377--4104909142373009110', commit timestamp: 'Timestamp(1574796807, 443)'
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.641-0500 I STORAGE [ReplWriterWorker-12] Deferring table drop for collection 'test5_fsmdb0.agg_out'. Ident: collection-1369--4104909142373009110, commit timestamp: Timestamp(1574796807, 443)
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.648-0500 I INDEX [conn46] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f61a7034-2a80-40c1-a416-62ce435a0e35
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.649-0500 I INDEX [conn46] Registering index build: 09288ac8-376e-4b9a-b1e4-1542f2114c3f
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.650-0500 I STORAGE [ReplWriterWorker-5] createCollection: test5_fsmdb0.tmp.agg_out.f61a7034-2a80-40c1-a416-62ce435a0e35 with provided UUID: b96cafac-e50c-4fba-b89f-189cd3f546dc and options: { uuid: UUID("b96cafac-e50c-4fba-b89f-189cd3f546dc"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.653-0500 I INDEX [conn112] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.dc957488-307a-4bfd-9f34-ba528cc3e6c0
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:27.653-0500 I INDEX [conn112] Registering index build: 0ea61470-f636-4348-a5e8-ea4b749650f1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.661-0500 I INDEX [ReplWriterWorker-5] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f61a7034-2a80-40c1-a416-62ce435a0e35
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.662-0500 I STORAGE [ReplWriterWorker-8] createCollection: test5_fsmdb0.tmp.agg_out.dc957488-307a-4bfd-9f34-ba528cc3e6c0 with provided UUID: 2dba5bd9-2911-4f33-8eb2-87a7a688314f and options: { uuid: UUID("2dba5bd9-2911-4f33-8eb2-87a7a688314f"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.663-0500 I STORAGE [ReplWriterWorker-14] createCollection: test5_fsmdb0.tmp.agg_out.f61a7034-2a80-40c1-a416-62ce435a0e35 with provided UUID: b96cafac-e50c-4fba-b89f-189cd3f546dc and options: { uuid: UUID("b96cafac-e50c-4fba-b89f-189cd3f546dc"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.676-0500 I INDEX [ReplWriterWorker-8] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.dc957488-307a-4bfd-9f34-ba528cc3e6c0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.677-0500 I INDEX [ReplWriterWorker-14] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.f61a7034-2a80-40c1-a416-62ce435a0e35
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.678-0500 I STORAGE [ReplWriterWorker-1] createCollection: test5_fsmdb0.tmp.agg_out.dc957488-307a-4bfd-9f34-ba528cc3e6c0 with provided UUID: 2dba5bd9-2911-4f33-8eb2-87a7a688314f and options: { uuid: UUID("2dba5bd9-2911-4f33-8eb2-87a7a688314f"), temp: true, validationLevel: "moderate", validationAction: "error" }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.691-0500 I INDEX [ReplWriterWorker-7] index build: starting on test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.691-0500 I INDEX [ReplWriterWorker-7] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.691-0500 I STORAGE [ReplWriterWorker-7] Index build initialized: fe04d266-f351-4856-986c-e580c6380cff: test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037 (99edb5a0-9acc-4cb7-bca9-7453ec0fc251 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.692-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.692-0500 I INDEX [ReplWriterWorker-1] index build: done building index _id_ on ns test5_fsmdb0.tmp.agg_out.dc957488-307a-4bfd-9f34-ba528cc3e6c0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.692-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.694-0500 I INDEX [IndexBuildsCoordinatorMongod-2] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:27.697-0500 I STORAGE [IndexBuildsCoordinatorMongod-2] Index build completed successfully: fe04d266-f351-4856-986c-e580c6380cff: test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037 ( 99edb5a0-9acc-4cb7-bca9-7453ec0fc251 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.704-0500 I INDEX [ReplWriterWorker-0] index build: starting on test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037 properties: { v: 2, key: { _id: "hashed" }, name: "_id_hashed" } using method: Hybrid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.704-0500 I INDEX [ReplWriterWorker-0] build may temporarily use up to 500 megabytes of RAM
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.704-0500 I STORAGE [ReplWriterWorker-0] Index build initialized: abb4fddd-179c-4c06-bfd7-a8585dd20816: test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037 (99edb5a0-9acc-4cb7-bca9-7453ec0fc251 ): indexes: 1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.704-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.705-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.706-0500 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index _id_hashed on ns test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:27.708-0500 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: abb4fddd-179c-4c06-bfd7-a8585dd20816: test5_fsmdb0.tmp.agg_out.7f231b14-f53a-4277-b094-607ef4ca1037 ( 99edb5a0-9acc-4cb7-bca9-7453ec0fc251 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:28.001-0500 E - [ftdc] Assertion: Location13538: couldn't open [/proc/14076/stat] Too many open files src/mongo/util/processinfo_linux.cpp 76
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:28.153-0500 E STORAGE [conn110] WiredTiger error (24) [1574796808:153841][14076:0x7f6bcc5f6700], WT_SESSION.create: __posix_directory_sync, 135: /home/nz_linux/data/job0/resmoke/shard0/node0/: directory-sync: open: Too many open files Raw: [1574796808:153841][14076:0x7f6bcc5f6700], WT_SESSION.create: __posix_directory_sync, 135: /home/nz_linux/data/job0/resmoke/shard0/node0/: directory-sync: open: Too many open files
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:28.153-0500 E STORAGE [conn110] WiredTiger error (24) [1574796808:153890][14076:0x7f6bcc5f6700], WT_SESSION.create: __posix_directory_sync, 151: /home/nz_linux/data/job0/resmoke/shard0/node0/index-1399-8224331490264904478.wt: directory-sync: Too many open files Raw: [1574796808:153890][14076:0x7f6bcc5f6700], WT_SESSION.create: __posix_directory_sync, 151: /home/nz_linux/data/job0/resmoke/shard0/node0/index-1399-8224331490264904478.wt: directory-sync: Too many open files
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:28.153-0500 E STORAGE [conn110] WiredTiger error (-31804) [1574796808:153928][14076:0x7f6bcc5f6700], WT_SESSION.create: __wt_panic, 490: the process must exit and restart: WT_PANIC: WiredTiger library panic Raw: [1574796808:153928][14076:0x7f6bcc5f6700], WT_SESSION.create: __wt_panic, 490: the process must exit and restart: WT_PANIC: WiredTiger library panic
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:28.153-0500 F - [conn110] Fatal Assertion 50853 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 428
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:28.154-0500 F - [conn110]
[ShardedClusterFixture:job0:shard0:primary]
[ShardedClusterFixture:job0:shard0:primary] ***aborting after fassert() failure
[ShardedClusterFixture:job0:shard0:primary]
[ShardedClusterFixture:job0:shard0:primary]
[ShardedClusterFixture:job0:shard0:primary] 2019-11-26T14:33:28.297-0500 F - [conn110] Got signal: 6 (Aborted).
[ShardedClusterFixture:job0:shard0:primary] 55F73F13C3C4 55F73F13D0DF 55F73F13B5CC 55F73F13B656 7F6C85ECF390 7F6C85B29428 7F6C85B2B02A 55F73D491141 55F73D1FBF3C 55F73D83F04B 55F73D208E32 55F73D209296 55F73D204B0E 55F73D811B9B 55F73D80DBF2 55F73D863596 55F73D824FEE 55F73D825A60 55F73D8252EF 55F73D839C82 55F73D839E8A 55F73D781B5F 55F73D791599 55F73DE7A8F3 55F73DF4314D 55F73DF3FEE5 55F73DF35FDF 55F73DF2CDEF 55F73DA4383D 55F73DC8A2EF 55F73DC8AD0D 55F73E1B2ED6 55F73E1B861F 55F73D9AE5FC 55F73D9B099E 55F73D9B19B5 55F73D99E32C 55F73D9AB11C 55F73D9A62BF 55F73D9A9BEC 55F73EBD3162 55F73D9A422D 55F73D9A6FBB 55F73D9A7C5F 55F73D9A621B 55F73D9A9BEC 55F73EBD35FB 55F73EE98E66 55F73EE98ED4 7F6C85EC56BA 7F6C85BFB41D
[ShardedClusterFixture:job0:shard0:primary] ----- BEGIN BACKTRACE -----
[ShardedClusterFixture:job0:shard0:primary] {"backtrace":[{"b":"55F73C6C0000","o":"2A7C3C4","s":"_ZN5mongo15printStackTraceERNS_14StackTraceSinkE"},{"b":"55F73C6C0000","o":"2A7D0DF","s":"_ZN5mongo15printStackTraceERSo"},{"b":"55F73C6C0000","o":"2A7B5CC"},{"b":"55F73C6C0000","o":"2A7B656"},{"b":"7F6C85EBE000","o":"11390"},{"b":"7F6C85AF4000","o":"35428","s":"gsignal"},{"b":"7F6C85AF4000","o":"3702A","s":"abort"},{"b":"55F73C6C0000","o":"DD1141","s":"_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj"},{"b":"55F73C6C0000","o":"B3BF3C"},{"b":"55F73C6C0000","o":"117F04B"},{"b":"55F73C6C0000","o":"B48E32","s":"__wt_err_func"},{"b":"55F73C6C0000","o":"B49296","s":"__wt_panic"},{"b":"55F73C6C0000","o":"B44B0E"},{"b":"55F73C6C0000","o":"1151B9B"},{"b":"55F73C6C0000","o":"114DBF2","s":"__wt_open"},{"b":"55F73C6C0000","o":"11A3596","s":"__wt_block_manager_create"},{"b":"55F73C6C0000","o":"1164FEE","s":"__wt_schema_create"},{"b":"55F73C6C0000","o":"1165A60"},{"b":"55F73C6C0000","o":"11652EF","s":"__wt_schema_create"},{"b":"55F73C6C0000","o":"1179C82","s":"__wt_session_create"},{"b":"55F73C6C0000","o":"1179E8A"},{"b":"55F73C6C0000","o":"10C1B5F","s":"_ZN5mongo15WiredTigerIndex6CreateEPNS_16OperationContextERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESA_"},{"b":"55F73C6C0000","o":"10D1599","s":"_ZN5mongo18WiredTigerKVEngine32createGroupedSortedDataInterfaceEPNS_16OperationContextERKNS_17CollectionOptionsENS_10StringDataEPKNS_15IndexDescriptorENS_8KVPrefixE"},{"b":"55F73C6C0000","o":"17BA8F3","s":"_ZN5mongo18DurableCatalogImpl20prepareForIndexBuildEPNS_16OperationContextENS_8RecordIdEPKNS_15IndexDescriptorEN5boost8optionalINS_4UUIDEEEb"},{"b":"55F73C6C0000","o":"188314D","s":"_ZN5mongo15IndexBuildBlock4initEPNS_16OperationContextEPNS_10CollectionE"},{"b":"55F73C6C0000","o":"187FEE5","s":"_ZN5mongo15MultiIndexBlock4initEPNS_16OperationContextEPNS_10CollectionERKSt6vectorINS_7BSONObjESaIS6_EESt8functionIFNS_6StatusERS8_EE"},{"b":"55F73C6C0000","o":"1875FDF","s":"_ZN5mongo18IndexBuildsManager15setUpIndexBuildEPNS_16OperationContextEPNS_10CollectionERKSt6vectorINS_7BSONObjESaIS6_EERKNS_4UUIDESt8functionIFNS_6StatusERS8_EENS0_12SetupOptionsE"},{"b":"55F73C6C0000","o":"186CDEF","s":"_ZN5mongo22IndexBuildsCoordinator27_registerAndSetUpIndexBuildEPNS_16OperationContextENS_10StringDataENS_4UUIDERKSt6vectorINS_7BSONObjESaIS6_EERKS4_NS_18IndexBuildProtocolEN5boost8optionalINS_19CommitQuorumOptionsEEE"},{"b":"55F73C6C0000","o":"138383D","s":"_ZN5mongo28IndexBuildsCoordinatorMongod15startIndexBuildEPNS_16OperationContextENS_10StringDataENS_4UUIDERKSt6vectorINS_7BSONObjESaIS6_EERKS4_NS_18IndexBuildProtocolENS_22IndexBuildsCoordinator17IndexBuildOptionsE"},{"b":"55F73C6C0000","o":"15CA2EF"},{"b":"55F73C6C0000","o":"15CAD0D"},{"b":"55F73C6C0000","o":"1AF2ED6","s":"_ZN5mongo23ErrmsgCommandDeprecated3runEPNS_16OperationContextERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjERNS_14BSONObjBuilderE"},{"b":"55F73C6C0000","o":"1AF861F","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"55F73C6C0000","o":"12EE5FC"},{"b":"55F73C6C0000","o":"12F099E"},{"b":"55F73C6C0000","o":"12F19B5","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"55F73C6C0000","o":"12DE32C","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"55F73C6C0000","o":"12EB11C","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"55F73C6C0000","o":"12E62BF","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"55F73C6C0000","o":"12E9BEC"},{"b":"55F73C6C0000","o":"2513162","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"55F73C6C0000","o":"12E422D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"55F73C6C0000","o":"12E6FBB","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"55F73C6C0000","o":"12E7C5F","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"55F73C6C0000","o":"12E621B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"55F73C6C0000","o":"12E9BEC"},{"b":"55F73C6C0000","o":"25135FB"},{"b":"55F73C6C0000","o":"27D8E66"},{"b":"55F73C6C0000","o":"27D8ED4"},{"b":"7F6C85EBE000","o":"76BA"},{"b":"7F6C85AF4000","o":"10741D","s":"clone"}],"processInfo":{"mongodbVersion":"0.0.0","gitVersion":"unknown","compiledModules":["enterprise","ninja"],"uname":{"sysname":"Linux","release":"4.4.0-112-generic","version":"#135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018","machine":"x86_64"},"somap":[{"b":"55F73C6C0000","elfType":3,"buildId":"319A9689EA148B8F6AD1607C5D49FA53F50D9986"},{"b":"7F6C85EBE000","path":"/lib/x86_64-linux-gnu/libpthread.so.0","elfType":3,"buildId":"B17C21299099640A6D863E423D99265824E7BB16"},{"b":"7F6C85AF4000","path":"/lib/x86_64-linux-gnu/libc.so.6","elfType":3,"buildId":"1CA54A6E0D76188105B12E49FE6B8019BF08803A"}]}}
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo15printStackTraceERNS_14StackTraceSinkE+0xB4) [0x55F73F13C3C4]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo15printStackTraceERSo+0x2F) [0x55F73F13D0DF]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0x2A7B5CC) [0x55F73F13B5CC]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0x2A7B656) [0x55F73F13B656]
[ShardedClusterFixture:job0:shard0:primary] libpthread.so.0(+0x11390) [0x7F6C85ECF390]
[ShardedClusterFixture:job0:shard0:primary] libc.so.6(gsignal+0x38) [0x7F6C85B29428]
[ShardedClusterFixture:job0:shard0:primary] libc.so.6(abort+0x16A) [0x7F6C85B2B02A]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj+0) [0x55F73D491141]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0xB3BF3C) [0x55F73D1FBF3C]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0x117F04B) [0x55F73D83F04B]
[ShardedClusterFixture:job0:shard0:primary] mongod(__wt_err_func+0x90) [0x55F73D208E32]
[ShardedClusterFixture:job0:shard0:primary] mongod(__wt_panic+0x39) [0x55F73D209296]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0xB44B0E) [0x55F73D204B0E]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0x1151B9B) [0x55F73D811B9B]
[ShardedClusterFixture:job0:shard0:primary] mongod(__wt_open+0x282) [0x55F73D80DBF2]
[ShardedClusterFixture:job0:shard0:primary] mongod(__wt_block_manager_create+0x56) [0x55F73D863596]
[ShardedClusterFixture:job0:shard0:primary] mongod(__wt_schema_create+0x63E) [0x55F73D824FEE]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0x1165A60) [0x55F73D825A60]
[ShardedClusterFixture:job0:shard0:primary] mongod(__wt_schema_create+0x93F) [0x55F73D8252EF]
[ShardedClusterFixture:job0:shard0:primary] mongod(__wt_session_create+0x242) [0x55F73D839C82]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0x1179E8A) [0x55F73D839E8A]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo15WiredTigerIndex6CreateEPNS_16OperationContextERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESA_+0x8F) [0x55F73D781B5F]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo18WiredTigerKVEngine32createGroupedSortedDataInterfaceEPNS_16OperationContextERKNS_17CollectionOptionsENS_10StringDataEPKNS_15IndexDescriptorENS_8KVPrefixE+0x3C9) [0x55F73D791599]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo18DurableCatalogImpl20prepareForIndexBuildEPNS_16OperationContextENS_8RecordIdEPKNS_15IndexDescriptorEN5boost8optionalINS_4UUIDEEEb+0x603) [0x55F73DE7A8F3]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo15IndexBuildBlock4initEPNS_16OperationContextEPNS_10CollectionE+0x28D) [0x55F73DF4314D]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo15MultiIndexBlock4initEPNS_16OperationContextEPNS_10CollectionERKSt6vectorINS_7BSONObjESaIS6_EESt8functionIFNS_6StatusERS8_EE+0x9B5) [0x55F73DF3FEE5]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo18IndexBuildsManager15setUpIndexBuildEPNS_16OperationContextEPNS_10CollectionERKSt6vectorINS_7BSONObjESaIS6_EERKNS_4UUIDESt8functionIFNS_6StatusERS8_EENS0_12SetupOptionsE+0x4DF) [0x55F73DF35FDF]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo22IndexBuildsCoordinator27_registerAndSetUpIndexBuildEPNS_16OperationContextENS_10StringDataENS_4UUIDERKSt6vectorINS_7BSONObjESaIS6_EERKS4_NS_18IndexBuildProtocolEN5boost8optionalINS_19CommitQuorumOptionsEEE+0x57F) [0x55F73DF2CDEF]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo28IndexBuildsCoordinatorMongod15startIndexBuildEPNS_16OperationContextENS_10StringDataENS_4UUIDERKSt6vectorINS_7BSONObjESaIS6_EERKS4_NS_18IndexBuildProtocolENS_22IndexBuildsCoordinator17IndexBuildOptionsE+0x49D) [0x55F73DA4383D]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0x15CA2EF) [0x55F73DC8A2EF]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0x15CAD0D) [0x55F73DC8AD0D]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo23ErrmsgCommandDeprecated3runEPNS_16OperationContextERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjERNS_14BSONObjBuilderE+0x46) [0x55F73E1B2ED6]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0xAF) [0x55F73E1B861F]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0x12EE5FC) [0x55F73D9AE5FC]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0x12F099E) [0x55F73D9B099E]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x535) [0x55F73D9B19B5]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x55F73D99E32C]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x55F73D9AB11C]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x55F73D9A62BF]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0x12E9BEC) [0x55F73D9A9BEC]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x55F73EBD3162]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x55F73D9A422D]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x88B) [0x55F73D9A6FBB]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2FF) [0x55F73D9A7C5F]
[ShardedClusterFixture:job0:shard0:primary] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x55F73D9A621B]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0x12E9BEC) [0x55F73D9A9BEC]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0x25135FB) [0x55F73EBD35FB]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0x27D8E66) [0x55F73EE98E66]
[ShardedClusterFixture:job0:shard0:primary] mongod(+0x27D8ED4) [0x55F73EE98ED4]
[ShardedClusterFixture:job0:shard0:primary] libpthread.so.0(+0x76BA) [0x7F6C85EC56BA]
[ShardedClusterFixture:job0:shard0:primary] libc.so.6(clone+0x6D) [0x7F6C85BFB41D]
[ShardedClusterFixture:job0:shard0:primary] ----- END BACKTRACE -----
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:28.507-0500 I NETWORK [conn223] end connection 127.0.0.1:47732 (40 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:28.507-0500 I NETWORK [conn125] end connection 127.0.0.1:46538 (39 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:28.507-0500 2019-11-26T14:33:28.507-0500 I NETWORK [thread2] DBClientConnection failed to receive message from localhost:20001 - HostUnreachable: Connection closed by peer
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:28.507-0500 I NETWORK [conn120] end connection 127.0.0.1:46492 (38 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:28.507-0500 I CONNPOOL [conn58] Ending connection to host localhost:20001 due to bad connection status: HostUnreachable: Connection closed by peer; 1 connections to that host remain open
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:28.508-0500 I NETWORK [conn59] end connection 127.0.0.1:46054 (37 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:28.507-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Marking host localhost:20001 as failed :: caused by :: HostUnreachable: Connection reset by peer
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:33:28.508-0500 I NETWORK [conn31] end connection 127.0.0.1:34610 (13 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:33:28.508-0500 I NETWORK [conn30] end connection 127.0.0.1:51246 (13 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:28.508-0500 I NETWORK [conn58] end connection 127.0.0.1:46050 (36 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:28.508-0500 I NETWORK [conn30] end connection 127.0.0.1:52380 (16 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:28.507-0500 I CONNPOOL [conn201] Ending connection to host localhost:20001 due to bad connection status: HostUnreachable: Connection closed by peer; 2 connections to that host remain open
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:28.508-0500 I NETWORK [conn30] end connection 127.0.0.1:51494 (15 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:28.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:28.508-0500 I NETWORK [conn30] end connection 127.0.0.1:55632 (22 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:28.508-0500 I NETWORK [conn5] end connection 127.0.0.1:52032 (15 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:28.508-0500 I NETWORK [conn4] end connection 127.0.0.1:51138 (14 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:28.508-0500 I NETWORK [conn201] Marking host localhost:20001 as failed :: caused by :: HostUnreachable: Connection closed by peer
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:28.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Ending connection to host localhost:20001 due to bad connection status: HostUnreachable: Connection reset by peer; 0 connections to that host remain open
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:28.508-0500 I CONNPOOL [ReplCoordExternNetwork] Ending connection to host localhost:20001 due to bad connection status: HostUnreachable: Connection closed by peer; 2 connections to that host remain open
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:28.508-0500 I NETWORK [conn31] end connection 127.0.0.1:55634 (21 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:28.508-0500 I CONNPOOL [conn200] Ending connection to host localhost:20001 due to bad connection status: HostUnreachable: Connection closed by peer; 1 connections to that host remain open
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:28.508-0500 I CONNPOOL [conn59] Ending connection to host localhost:20001 due to bad connection status: HostUnreachable: Connection closed by peer; 0 connections to that host remain open
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:28.508-0500 I REPL [ReplCoordExtern-0] Restarting oplog query due to error: HostUnreachable: error in fetcher batch callback :: caused by :: Connection closed by peer. Last fetched optime: { ts: Timestamp(1574796807, 448), t: 1 }. Restarts remaining: 1
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:28.587-0500 2019-11-26T14:33:28.587-0500 I NETWORK [thread2] DBClientConnection failed to receive message from localhost:20001 - HostUnreachable: Connection closed by peer
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:28.508-0500 I CONNPOOL [conn204] Ending connection to host localhost:20001 due to bad connection status: HostUnreachable: Connection closed by peer; 0 connections to that host remain open
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:28.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:29.167-0500 I CONNPOOL [ReplCoord-4] dropping unhealthy pooled connection to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:28.508-0500 I CONNPOOL [ReplCoordExtern-0] dropping unhealthy pooled connection to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:28.508-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:29.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500 2019-11-26T14:33:28.587-0500 I QUERY [thread2] Failed to end session { id: UUID("69ae2258-2be3-4e71-920e-e12aa8ed3a64") } due to HostUnreachable: network error while attempting to run command 'endSessions' on host 'localhost:20001'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500 [jsTest] [
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500 [jsTest] "preserveFailPointOpTime" : Timestamp(1574796742, 508)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.912-0500 [jsTest] "preserveFailPointOpTime" : Timestamp(1574796741, 5057)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] "preserveFailPointOpTime" : Timestamp(1574796741, 5123)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] "listDatabaseOpTime" : Timestamp(1574796742, 508)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] "listDatabaseOpTime" : Timestamp(1574796741, 5121)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.913-0500 [jsTest] {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:29.167-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.914-0500 [jsTest] "node" : connection to localhost:20003,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:28.508-0500 I CONNPOOL [ReplCoordExtern-0] dropping unhealthy pooled connection to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.914-0500 [jsTest] "session" : {
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:29.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.914-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.914-0500 [jsTest] },
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:28.587-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] dropping unhealthy pooled connection to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.914-0500 [jsTest] "listDatabaseOpTime" : Timestamp(1574796741, 5123)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.914-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.914-0500 [jsTest] {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:29.167-0500 I ELECTION [ReplCoord-4] Scheduling catchup takeover at 2019-11-26T14:33:59.167-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.914-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796742, 511),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.915-0500 [jsTest] "signedClusterTime" : {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:28.508-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.915-0500 [jsTest] "clusterTime" : Timestamp(1574796742, 577),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.915-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.915-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.915-0500 [jsTest] "keyId" : NumberLong(0)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:30.007-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.915-0500 [jsTest] }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:28.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.915-0500 [jsTest] }
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:29.168-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.915-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.915-0500 [jsTest] {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:28.508-0500 I REPL [ReplCoordExtern-0] Scheduled new oplog query Fetcher source: localhost:20001 database: local query: { find: "oplog.rs", filter: { ts: { $gte: Timestamp(1574796807, 448) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 2000, batchSize: 13981010, term: 1, readConcern: { afterClusterTime: Timestamp(0, 1) } } query metadata: { $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" } } active: 1 findNetworkTimeout: 7000ms getMoreNetworkTimeout: 35000ms shutting down?: 0 first: 1 firstCommandScheduler: RemoteCommandRetryScheduler request: RemoteCommand 11832 -- target:localhost:20001 db:local cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp(1574796807, 448) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 2000, batchSize: 13981010, term: 1, readConcern: { afterClusterTime: Timestamp(0, 1) } } active: 1 callbackHandle.valid: 1 callbackHandle.cancelled: 0 attempt: 1 retryPolicy: {type: "NoRetryPolicy"}
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.915-0500 [jsTest] "node" : connection to localhost:20002,
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:30.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.916-0500 [jsTest] "session" : {
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:29.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.916-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:29.168-0500 I REPL [ReplCoord-4] Member localhost:20001 is now in state RS_DOWN - Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.916-0500 [jsTest] },
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:28.509-0500 I REPL [ReplCoordExtern-2] Error returned from oplog query (no more query restarts left): HostUnreachable: error in fetcher batch callback :: caused by :: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.916-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796742, 511)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:30.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.916-0500 [jsTest] },
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:29.587-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.916-0500 [jsTest] {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:30.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.916-0500 [jsTest] "node" : connection to localhost:20003,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:28.509-0500 W REPL [BackgroundSync] Fetcher stopped querying remote oplog with error: HostUnreachable: error in fetcher batch callback :: caused by :: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.917-0500 [jsTest] "session" : {
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:29.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.917-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:28.509-0500 I REPL [BackgroundSync] Clearing sync source localhost:20001 to choose a new one.
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.917-0500 [jsTest] },
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:30.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.917-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796742, 513)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:28.509-0500 I REPL [BackgroundSync] could not find member to sync from
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.917-0500 [jsTest] },
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:30.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.917-0500 [jsTest] {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:30.917-0500 I NETWORK [conn106] end connection 127.0.0.1:53296 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.917-0500 [jsTest] "node" : connection to localhost:20001,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:28.509-0500 I CONNPOOL [ReplCoord-4] dropping unhealthy pooled connection to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.918-0500 [jsTest] "session" : {
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:30.913-0500 I NETWORK [conn207] end connection 127.0.0.1:46106 (7 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.918-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:28.510-0500 I ELECTION [ReplCoord-4] Scheduling catchup takeover at 2019-11-26T14:33:58.510-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.918-0500 [jsTest] },
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:28.510-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.918-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796742, 511)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:28.510-0500 I REPL [ReplCoord-7] Member localhost:20001 is now in state RS_DOWN - Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.918-0500 [jsTest] },
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:29.010-0500 I REPL [ReplCoord-8] Canceling catchup takeover callback
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.918-0500 [jsTest] {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:29.010-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.918-0500 [jsTest] "node" : connection to localhost:20002,
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:29.510-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.918-0500 [jsTest] "session" : {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:29.510-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.919-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:30.010-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.919-0500 [jsTest] },
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:30.510-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.919-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796742, 511)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:30.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.919-0500 [jsTest] },
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:30.917-0500 I NETWORK [conn106] end connection 127.0.0.1:54186 (14 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.919-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.919-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.919-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.919-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.919-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.919-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796742, 511)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.919-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.919-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.919-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.919-0500 [jsTest] "operationTime" : Timestamp(1574796745, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796742, 511). Collection minimum timestamp is Timestamp(1574796745, 1)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796742, 1518),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "ts" : Timestamp(1574796740, 567),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "clusterTime" : Timestamp(1574796745, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.920-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796745, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] "clusterTime" : Timestamp(1574796745, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796745, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796745, 2)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:30.921-0500 I NETWORK [conn205] end connection 127.0.0.1:46082 (6 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.921-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796745, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:30.922-0500 I NETWORK [conn154] end connection 127.0.0.1:57488 (20 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796745, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] "session" : {
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:30.922-0500 I NETWORK [conn220] end connection 127.0.0.1:47710 (35 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.922-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796745, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "operationTime" : Timestamp(1574796745, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796745, 1). Collection minimum timestamp is Timestamp(1574796745, 6)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796745, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "ts" : Timestamp(1574796745, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "clusterTime" : Timestamp(1574796745, 73),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.923-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796745, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] "clusterTime" : Timestamp(1574796745, 73),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796745, 7)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.924-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796745, 8)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796745, 7)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796745, 7)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796745, 7)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.925-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "operationTime" : Timestamp(1574796745, 1516),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796745, 7). Collection minimum timestamp is Timestamp(1574796745, 1512)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796745, 8),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "ts" : Timestamp(1574796745, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "clusterTime" : Timestamp(1574796745, 1516),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.926-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796745, 1516),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] "clusterTime" : Timestamp(1574796745, 1516),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796745, 1516)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796745, 1517)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.927-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796745, 1516)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796745, 1516)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796745, 1516)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] "operationTime" : Timestamp(1574796745, 2525),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.928-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796745, 1516). Collection minimum timestamp is Timestamp(1574796745, 2524)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796745, 1518),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "ts" : Timestamp(1574796745, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "clusterTime" : Timestamp(1574796745, 2525),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796745, 2525),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.929-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.930-0500 [jsTest] "clusterTime" : Timestamp(1574796745, 2525),
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:33:30.929-0500 I NETWORK [conn101] end connection 127.0.0.1:52902 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.930-0500 [jsTest] "signature" : {
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:33:30.929-0500 I NETWORK [conn101] end connection 127.0.0.1:36264 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.930-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.931-0500 [jsTest] "keyId" : NumberLong(0)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:30.929-0500 I NETWORK [conn219] end connection 127.0.0.1:47708 (34 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.931-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.931-0500 [jsTest] }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:30.929-0500 I NETWORK [conn105] end connection 127.0.0.1:54154 (13 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.931-0500 [jsTest] },
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:30.930-0500 I NETWORK [conn153] end connection 127.0.0.1:57486 (19 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.931-0500 [jsTest] {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:30.930-0500 I NETWORK [conn105] end connection 127.0.0.1:53264 (12 connections now open)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.931-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.931-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.931-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.931-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.931-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796745, 2525)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.931-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.931-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.931-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.931-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796745, 2526)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796745, 2525)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796745, 2525)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796745, 2525)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.932-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "operationTime" : Timestamp(1574796745, 3037),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796745, 2525). Collection minimum timestamp is Timestamp(1574796745, 3034)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796745, 2526),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "ts" : Timestamp(1574796745, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "clusterTime" : Timestamp(1574796745, 3101),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.933-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796745, 3037),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] "clusterTime" : Timestamp(1574796745, 3101),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796745, 3037)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.934-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796745, 3540)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796745, 3037)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796745, 3037)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796745, 3037)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] "operationTime" : Timestamp(1574796748, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.935-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796745, 3037). Collection minimum timestamp is Timestamp(1574796748, 2)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796745, 4043),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "ts" : Timestamp(1574796747, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "clusterTime" : Timestamp(1574796748, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.936-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796748, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] "clusterTime" : Timestamp(1574796748, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796748, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796748, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.937-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796748, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796748, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796748, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] "operationTime" : Timestamp(1574796748, 5),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796748, 2). Collection minimum timestamp is Timestamp(1574796748, 4)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.938-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796748, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "ts" : Timestamp(1574796747, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "clusterTime" : Timestamp(1574796748, 199),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796748, 5),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "clusterTime" : Timestamp(1574796748, 199),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.939-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796748, 5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796748, 6)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.940-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796748, 5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796748, 5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796748, 5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "operationTime" : Timestamp(1574796748, 1021),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796748, 5). Collection minimum timestamp is Timestamp(1574796748, 1021)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.941-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796748, 6),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "ts" : Timestamp(1574796747, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "clusterTime" : Timestamp(1574796748, 1137),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796748, 1021),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "clusterTime" : Timestamp(1574796748, 1137),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.942-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796748, 1021)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796748, 1201)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796748, 1021)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.943-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796748, 1021)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796748, 1021)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "operationTime" : Timestamp(1574796748, 1518),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796748, 1021). Collection minimum timestamp is Timestamp(1574796748, 1399)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796748, 1204),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.944-0500 [jsTest] "ts" : Timestamp(1574796747, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] "clusterTime" : Timestamp(1574796748, 1582),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796748, 1518),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] "clusterTime" : Timestamp(1574796748, 1582),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.945-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796748, 1518)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796748, 1582)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796748, 1518)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796748, 1518)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.946-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796748, 1518)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "operationTime" : Timestamp(1574796749, 316),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796748, 1518). Collection minimum timestamp is Timestamp(1574796749, 316)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796748, 2213),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "ts" : Timestamp(1574796747, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "clusterTime" : Timestamp(1574796749, 316),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.947-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796749, 316),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] "clusterTime" : Timestamp(1574796749, 316),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796749, 316)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796749, 318)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.948-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796749, 316)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796749, 316)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796749, 316)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "operationTime" : Timestamp(1574796749, 1892),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796749, 316). Collection minimum timestamp is Timestamp(1574796749, 1892)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.949-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796749, 1320),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "ts" : Timestamp(1574796747, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "clusterTime" : Timestamp(1574796749, 1956),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796749, 1892),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "clusterTime" : Timestamp(1574796749, 1956),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.950-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796749, 1892)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796749, 2332)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796749, 1892)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.951-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796749, 1892)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796749, 1892)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "operationTime" : Timestamp(1574796749, 3408),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796749, 1892). Collection minimum timestamp is Timestamp(1574796749, 2339)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796749, 2333),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "ts" : Timestamp(1574796747, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.952-0500 [jsTest] "clusterTime" : Timestamp(1574796749, 3472),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796749, 3408),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] "clusterTime" : Timestamp(1574796749, 3472),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796749, 3408)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.953-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796749, 3844)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796749, 3408)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796749, 3408)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796749, 3408)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.954-0500 [jsTest] "operationTime" : Timestamp(1574796749, 3848),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796749, 3408). Collection minimum timestamp is Timestamp(1574796749, 3847)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796749, 3847),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "ts" : Timestamp(1574796747, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "clusterTime" : Timestamp(1574796749, 3848),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796749, 3848),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.955-0500 [jsTest] "clusterTime" : Timestamp(1574796749, 3848),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796749, 3848)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796749, 3848)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796749, 3848)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.956-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796749, 3848)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796749, 3848)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "operationTime" : Timestamp(1574796752, 4),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796749, 3848). Collection minimum timestamp is Timestamp(1574796752, 4)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796749, 3848),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "ts" : Timestamp(1574796747, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.957-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] "clusterTime" : Timestamp(1574796752, 69),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796752, 4),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] "clusterTime" : Timestamp(1574796752, 69),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.958-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796752, 4)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796752, 5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796752, 4)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796752, 4)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796752, 4)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.959-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "operationTime" : Timestamp(1574796752, 1079),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796752, 4). Collection minimum timestamp is Timestamp(1574796752, 1011)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796752, 763),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "ts" : Timestamp(1574796747, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "clusterTime" : Timestamp(1574796752, 1079),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.960-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796752, 1079),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "clusterTime" : Timestamp(1574796752, 1079),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796752, 1079)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796752, 1079)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.961-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796752, 1079)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796752, 1079)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796752, 1079)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] "operationTime" : Timestamp(1574796752, 2528),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:30.962-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796752, 1079). Collection minimum timestamp is Timestamp(1574796752, 2528)",
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:31.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.309-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.309-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.309-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.309-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.309-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.309-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.309-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796752, 1080),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.309-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.309-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.309-0500 [jsTest] "ts" : Timestamp(1574796747, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.309-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] "clusterTime" : Timestamp(1574796752, 2528),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796752, 2528),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] "clusterTime" : Timestamp(1574796752, 2528),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.310-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796752, 2528)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796752, 2978)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796752, 2528)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.311-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.312-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796752, 2528)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:31.010-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.312-0500 [jsTest] },
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:31.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.312-0500 [jsTest] {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:31.167-0500 I REPL [ReplCoord-4] Canceling catchup takeover callback
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.312-0500 [jsTest] "node" : connection to localhost:20003,
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:31.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.312-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.312-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.312-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.312-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796752, 2528)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.312-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.313-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.313-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.313-0500 [jsTest] "operationTime" : Timestamp(1574796753, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.313-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.313-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796752, 2528). Collection minimum timestamp is Timestamp(1574796752, 3604)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.313-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.313-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.313-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.313-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.313-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.313-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.313-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796752, 3601),
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:31.510-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.313-0500 [jsTest] "$configServerState" : {
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:31.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.313-0500 [jsTest] "opTime" : {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:31.168-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.313-0500 [jsTest] "ts" : Timestamp(1574796747, 7),
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:32.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.313-0500 [jsTest] "t" : NumberLong(1)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:32.010-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.314-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.314-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.314-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.314-0500 [jsTest] "clusterTime" : Timestamp(1574796753, 2),
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:31.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.314-0500 [jsTest] "signature" : {
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:31.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.314-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:33.168-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.314-0500 [jsTest] "keyId" : NumberLong(0)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:32.010-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.314-0500 [jsTest] }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:32.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.314-0500 [jsTest] }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:32.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.315-0500 [jsTest] },
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:32.510-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.315-0500 [jsTest] "performNoopWrite" : false
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:32.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.315-0500 [jsTest] },
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:32.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.315-0500 [jsTest] {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:33.010-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.315-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796753, 2),
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:32.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.315-0500 [jsTest] "signedClusterTime" : {
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:32.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.315-0500 [jsTest] "clusterTime" : Timestamp(1574796753, 2),
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:33.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] "signature" : {
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:33.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796753, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796753, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.316-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796753, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796753, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796753, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "operationTime" : Timestamp(1574796753, 510),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796753, 2). Collection minimum timestamp is Timestamp(1574796753, 510)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.317-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796753, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "ts" : Timestamp(1574796747, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "clusterTime" : Timestamp(1574796753, 639),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796753, 510),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.318-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] "clusterTime" : Timestamp(1574796753, 639),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796753, 510)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796753, 511)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.319-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796753, 510)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796753, 510)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796753, 510)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "operationTime" : Timestamp(1574796753, 1712),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796753, 510). Collection minimum timestamp is Timestamp(1574796753, 1518)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.320-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796753, 511),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "ts" : Timestamp(1574796747, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "clusterTime" : Timestamp(1574796753, 1776),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796753, 1712),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "clusterTime" : Timestamp(1574796753, 1776),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.321-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796753, 1904)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796753, 2022)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796753, 1712)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796753, 1712)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.322-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796753, 1712)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "operationTime" : Timestamp(1574796753, 3033),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796753, 1712). Collection minimum timestamp is Timestamp(1574796753, 2594)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796753, 2023),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "ts" : Timestamp(1574796747, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "clusterTime" : Timestamp(1574796753, 3033),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.323-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796753, 3033),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] "clusterTime" : Timestamp(1574796753, 3033),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796753, 3033)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.324-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796753, 3033)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796753, 3033)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796753, 3033)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796753, 3033)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "operationTime" : Timestamp(1574796753, 4548),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796753, 3033). Collection minimum timestamp is Timestamp(1574796753, 4546)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.325-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796753, 3034),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "ts" : Timestamp(1574796747, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "clusterTime" : Timestamp(1574796753, 4548),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796753, 4548),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "clusterTime" : Timestamp(1574796753, 4548),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.326-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796753, 4548)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796753, 5049)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796753, 4548)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.327-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796753, 4548)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796753, 4548)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "operationTime" : Timestamp(1574796756, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796753, 4548). Collection minimum timestamp is Timestamp(1574796753, 6007)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796753, 6007),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "ts" : Timestamp(1574796747, 7),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.328-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] "clusterTime" : Timestamp(1574796756, 53),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796756, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] "clusterTime" : Timestamp(1574796756, 53),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796756, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.329-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796756, 53)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796756, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796756, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796756, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.330-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "operationTime" : Timestamp(1574796756, 57),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796756, 1). Collection minimum timestamp is Timestamp(1574796756, 55)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796756, 53),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "ts" : Timestamp(1574796755, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "clusterTime" : Timestamp(1574796756, 121),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.331-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796756, 57),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "clusterTime" : Timestamp(1574796756, 121),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796756, 57)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796756, 57)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.332-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796756, 57)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796756, 57)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796756, 57)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "operationTime" : Timestamp(1574796756, 626),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796756, 57). Collection minimum timestamp is Timestamp(1574796756, 123)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796756, 121),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.333-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "ts" : Timestamp(1574796755, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "clusterTime" : Timestamp(1574796756, 690),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796756, 626),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "clusterTime" : Timestamp(1574796756, 690),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.334-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796756, 626)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796756, 1129)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796756, 626)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796756, 626)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.335-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796756, 626)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "operationTime" : Timestamp(1574796756, 1570),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796756, 626). Collection minimum timestamp is Timestamp(1574796756, 1569)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796756, 691),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "ts" : Timestamp(1574796755, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "clusterTime" : Timestamp(1574796756, 1570),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.336-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.337-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.337-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.337-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.337-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.337-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:33.337-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796756, 1570),
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:33.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] "clusterTime" : Timestamp(1574796756, 1570),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796756, 1570)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.806-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796756, 1635)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796756, 1570)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796756, 1570)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796756, 1570)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.807-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "operationTime" : Timestamp(1574796756, 2076),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796756, 1570). Collection minimum timestamp is Timestamp(1574796756, 1637)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796756, 1635),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "ts" : Timestamp(1574796755, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "clusterTime" : Timestamp(1574796756, 2140),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.808-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796756, 2076),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] "clusterTime" : Timestamp(1574796756, 2140),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] "keyId" : NumberLong(0)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:33.507-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796756, 2076)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.809-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.810-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.810-0500 [jsTest] "node" : connection to localhost:20003,
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:33.587-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.810-0500 [jsTest] "session" : {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:34.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.810-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.810-0500 [jsTest] },
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:33.510-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.810-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796756, 2142)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:33.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.810-0500 [jsTest] },
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:33.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.810-0500 [jsTest] {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:34.011-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.810-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.811-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.811-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.811-0500 [jsTest] },
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:34.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.811-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796756, 2076)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:34.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.811-0500 [jsTest] },
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:34.511-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.811-0500 [jsTest] {
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:34.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.811-0500 [jsTest] "node" : connection to localhost:20002,
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:34.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.811-0500 [jsTest] "session" : {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:34.511-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.811-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796756, 2076)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796756, 2076)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "operationTime" : Timestamp(1574796757, 3),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796756, 2076). Collection minimum timestamp is Timestamp(1574796757, 3)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796756, 2142),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.812-0500 [jsTest] "ts" : Timestamp(1574796755, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] "clusterTime" : Timestamp(1574796757, 68),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796757, 3),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] "clusterTime" : Timestamp(1574796757, 68),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.813-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796757, 3)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796757, 68)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796757, 3)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796757, 3)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.814-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796757, 3)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "operationTime" : Timestamp(1574796757, 1834),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796757, 3). Collection minimum timestamp is Timestamp(1574796757, 1514)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796757, 68),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "ts" : Timestamp(1574796755, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.815-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] "clusterTime" : Timestamp(1574796757, 1898),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796757, 1834),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] "clusterTime" : Timestamp(1574796757, 1898),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796757, 1962)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.816-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796757, 1964)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796757, 1834)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796757, 1834)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.817-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796757, 1834)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "operationTime" : Timestamp(1574796759, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796757, 1834). Collection minimum timestamp is Timestamp(1574796757, 2085)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796757, 2971),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "ts" : Timestamp(1574796755, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "clusterTime" : Timestamp(1574796759, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.818-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796759, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] "clusterTime" : Timestamp(1574796759, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796759, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.819-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796759, 4)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796759, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796759, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796759, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.820-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "operationTime" : Timestamp(1574796759, 1014),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796759, 2). Collection minimum timestamp is Timestamp(1574796759, 1014)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796759, 5),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "ts" : Timestamp(1574796757, 2085),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "clusterTime" : Timestamp(1574796759, 1014),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.821-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796759, 1014),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] "clusterTime" : Timestamp(1574796759, 1014),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796759, 1014)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796759, 1015)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.822-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796759, 1014)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796759, 1014)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796759, 1014)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "operationTime" : Timestamp(1574796759, 2593),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796759, 1014). Collection minimum timestamp is Timestamp(1574796759, 2593)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.823-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:34.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796759, 1016),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] "ts" : Timestamp(1574796757, 2085),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] "clusterTime" : Timestamp(1574796759, 2657),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.824-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796759, 2593),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] "clusterTime" : Timestamp(1574796759, 2657),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796759, 2593)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796759, 2785)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.825-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796759, 2593)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796759, 2593)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796759, 2593)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] "operationTime" : Timestamp(1574796759, 4045),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796759, 2593). Collection minimum timestamp is Timestamp(1574796759, 4045)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.826-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796759, 2785),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "ts" : Timestamp(1574796757, 2085),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "clusterTime" : Timestamp(1574796759, 4046),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796759, 4045),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "clusterTime" : Timestamp(1574796759, 4046),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.827-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796759, 4045)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796759, 4045)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796759, 4045)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.828-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796759, 4045)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796759, 4045)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "operationTime" : Timestamp(1574796760, 571),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796759, 4045). Collection minimum timestamp is Timestamp(1574796760, 67)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.829-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796759, 4045),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] "ts" : Timestamp(1574796757, 2085),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] "clusterTime" : Timestamp(1574796760, 636),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796760, 571),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] "clusterTime" : Timestamp(1574796760, 636),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.830-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796760, 571)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796760, 571)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796760, 571)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.831-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796760, 571)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796760, 571)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "operationTime" : Timestamp(1574796765, 3),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796760, 571). Collection minimum timestamp is Timestamp(1574796760, 572)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796762, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "ts" : Timestamp(1574796757, 2085),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.832-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] "clusterTime" : Timestamp(1574796765, 3),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796765, 3),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] "clusterTime" : Timestamp(1574796765, 3),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.833-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796765, 3)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796765, 4)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 3)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 3)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.834-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 3)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "operationTime" : Timestamp(1574796765, 331),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796765, 3). Collection minimum timestamp is Timestamp(1574796765, 11)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796765, 4),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "ts" : Timestamp(1574796757, 2085),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.835-0500 [jsTest] "clusterTime" : Timestamp(1574796765, 459),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796765, 331),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] "clusterTime" : Timestamp(1574796765, 459),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796765, 1075)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.836-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796765, 1512)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 331)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 331)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.837-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 331)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "operationTime" : Timestamp(1574796765, 2035),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796765, 331). Collection minimum timestamp is Timestamp(1574796765, 1514)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796765, 1513),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "ts" : Timestamp(1574796757, 2085),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "clusterTime" : Timestamp(1574796765, 2087),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.838-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796765, 2035),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] "clusterTime" : Timestamp(1574796765, 2087),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796765, 2087)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.839-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796765, 2281)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 2035)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 2035)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 2035)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.840-0500 [jsTest] "operationTime" : Timestamp(1574796765, 3034),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796765, 2035). Collection minimum timestamp is Timestamp(1574796765, 3034)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796765, 2526),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "ts" : Timestamp(1574796757, 2085),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "clusterTime" : Timestamp(1574796765, 3034),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796765, 3034),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.841-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] "clusterTime" : Timestamp(1574796765, 3034),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796765, 3034)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796765, 3036)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 3034)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.842-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 3034)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 3034)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "operationTime" : Timestamp(1574796765, 3674),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796765, 3034). Collection minimum timestamp is Timestamp(1574796765, 3040)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796765, 3036),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.843-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "ts" : Timestamp(1574796765, 2533),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "clusterTime" : Timestamp(1574796765, 3802),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796765, 3674),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "clusterTime" : Timestamp(1574796765, 3802),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.844-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796765, 3674)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796765, 4550)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 3674)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 3674)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.845-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 3674)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "operationTime" : Timestamp(1574796765, 5560),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796765, 3674). Collection minimum timestamp is Timestamp(1574796765, 5560)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796765, 4551),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "ts" : Timestamp(1574796765, 2533),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "clusterTime" : Timestamp(1574796765, 5560),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.846-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796765, 5560),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] "clusterTime" : Timestamp(1574796765, 5560),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796765, 5560)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796765, 5628)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.847-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 5560)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 5560)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796765, 5560)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "operationTime" : Timestamp(1574796766, 5),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796765, 5560). Collection minimum timestamp is Timestamp(1574796766, 5)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.848-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796765, 7069),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "ts" : Timestamp(1574796765, 2533),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "clusterTime" : Timestamp(1574796766, 5),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796766, 5),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "clusterTime" : Timestamp(1574796766, 5),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.849-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796766, 5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796766, 5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.850-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 5)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "operationTime" : Timestamp(1574796766, 511),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796766, 5). Collection minimum timestamp is Timestamp(1574796766, 7)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796766, 5),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "ts" : Timestamp(1574796765, 2533),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "clusterTime" : Timestamp(1574796766, 511),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.851-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796766, 511),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] "clusterTime" : Timestamp(1574796766, 511),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796766, 511)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.852-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796766, 1015)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 511)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 511)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 511)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] "operationTime" : Timestamp(1574796766, 1021),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.853-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796766, 511). Collection minimum timestamp is Timestamp(1574796766, 1021)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796766, 1015),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "ts" : Timestamp(1574796765, 2533),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "clusterTime" : Timestamp(1574796766, 1021),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796766, 1021),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.854-0500 [jsTest] "clusterTime" : Timestamp(1574796766, 1021),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796766, 1021)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796766, 1525)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 1021)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.855-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 1021)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 1021)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "operationTime" : Timestamp(1574796766, 3536),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796766, 1021). Collection minimum timestamp is Timestamp(1574796766, 3533)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796766, 1525),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.856-0500 [jsTest] "ts" : Timestamp(1574796765, 2533),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] "clusterTime" : Timestamp(1574796766, 3536),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796766, 3536),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] "clusterTime" : Timestamp(1574796766, 3536),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:34.857-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:35.007-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.496-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.496-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.496-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.496-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.496-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.496-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.496-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.496-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.496-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796766, 3536)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796766, 3536)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 3536)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:35.011-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.497-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 3536)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 3536)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] "operationTime" : Timestamp(1574796766, 3536),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796766, 3536). Collection minimum timestamp is Timestamp(1574796766, 3537)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] "electionId" : ObjectId("000000000000000000000000")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796766, 3536),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.498-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] "ts" : Timestamp(1574796765, 2533),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] "clusterTime" : Timestamp(1574796766, 3537),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796766, 3536),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] "clusterTime" : Timestamp(1574796766, 3538),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.499-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.500-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.500-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.500-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.500-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.500-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.500-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796766, 3536)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.500-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.500-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.500-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.500-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.500-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.500-0500 [jsTest] },
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:35.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.500-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796766, 3536)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:35.168-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.500-0500 [jsTest] },
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:36.973-0500 I CONNPOOL [Balancer] dropping unhealthy pooled connection to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.501-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.501-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.501-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.501-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:35.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.501-0500 [jsTest] },
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:35.511-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.501-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 3536)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:35.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.501-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.501-0500 [jsTest] {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:36.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.501-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.502-0500 [jsTest] "session" : {
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:36.973-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.502-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.502-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.502-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 3536)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:35.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.502-0500 [jsTest] },
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:35.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.502-0500 [jsTest] {
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:36.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.502-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.502-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.502-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.502-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.502-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 3536)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:37.168-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.503-0500 [jsTest] },
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:36.973-0500 I NETWORK [TransactionCoordinator] Marking host localhost:20001 as failed :: caused by :: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.503-0500 [jsTest] {
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:36.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.503-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.503-0500 [jsTest] "operationTime" : Timestamp(1574796766, 3540),
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:36.011-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.503-0500 [jsTest] "ok" : 0,
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:36.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.503-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796766, 3536). Collection minimum timestamp is Timestamp(1574796766, 3540)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.503-0500 [jsTest] "code" : 246,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:36.974-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] dropping unhealthy pooled connection to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.503-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:36.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.504-0500 [jsTest] "$gleStats" : {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:36.511-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.504-0500 [jsTest] "lastOpTime" : Timestamp(0, 0),
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:36.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.504-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:36.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.504-0500 [jsTest] },
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:36.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.504-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796766, 3536),
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:37.011-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.504-0500 [jsTest] "$configServerState" : {
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:37.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.504-0500 [jsTest] "opTime" : {
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:37.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.505-0500 [jsTest] "ts" : Timestamp(1574796765, 2533),
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:37.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.505-0500 [jsTest] "t" : NumberLong(1)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:37.011-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.505-0500 [jsTest] }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:37.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.505-0500 [jsTest] },
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:37.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.505-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.505-0500 [jsTest] "clusterTime" : Timestamp(1574796766, 3542),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.505-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.505-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.505-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.505-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.505-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.505-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.505-0500 [jsTest] "performNoopWrite" : true
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.505-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.505-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.505-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796766, 3540),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] "clusterTime" : Timestamp(1574796766, 3542),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796766, 3540)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796766, 3540)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.506-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.507-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.507-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.507-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 3540)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.507-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.507-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.507-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.507-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.507-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.507-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.507-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 3540)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.507-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.507-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.507-0500 [jsTest] "node" : connection to localhost:20003,
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:37.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.507-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 3540)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "operationTime" : Timestamp(1574796766, 4049),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796766, 3540). Collection minimum timestamp is Timestamp(1574796766, 4049)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796766, 3540),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "ts" : Timestamp(1574796765, 2533),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.508-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] "clusterTime" : Timestamp(1574796766, 4049),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796766, 4049),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] "clusterTime" : Timestamp(1574796766, 4049),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.509-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796766, 4049)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796766, 4053)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 4049)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 4049)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.510-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 4049)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "operationTime" : Timestamp(1574796766, 6204),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796766, 4049). Collection minimum timestamp is Timestamp(1574796766, 4186)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:37.511-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796766, 4054),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.511-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] "ts" : Timestamp(1574796765, 2533),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] "clusterTime" : Timestamp(1574796766, 6332),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796766, 6204),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] "clusterTime" : Timestamp(1574796766, 6332),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.512-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796766, 6204)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796767, 183)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 6204)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 6204)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.513-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796766, 6204)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "operationTime" : Timestamp(1574796767, 1134),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796766, 6204). Collection minimum timestamp is Timestamp(1574796767, 1134)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796767, 237),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "ts" : Timestamp(1574796765, 2533),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.514-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] "clusterTime" : Timestamp(1574796767, 1134),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796767, 1134),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] "clusterTime" : Timestamp(1574796767, 1134),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.515-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796767, 1134)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796767, 1250)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796767, 1134)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796767, 1134)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.516-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796767, 1134)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "operationTime" : Timestamp(1574796767, 1254),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796767, 1134). Collection minimum timestamp is Timestamp(1574796767, 1252)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796767, 1253),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "ts" : Timestamp(1574796765, 2533),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.517-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] "clusterTime" : Timestamp(1574796767, 1254),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796767, 1254),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] "clusterTime" : Timestamp(1574796767, 1254),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.518-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796767, 1254)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796767, 1254)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796767, 1254)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796767, 1254)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.519-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796767, 1254)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "operationTime" : Timestamp(1574796770, 245),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796767, 1254). Collection minimum timestamp is Timestamp(1574796770, 245)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796767, 1254),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "ts" : Timestamp(1574796767, 747),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "clusterTime" : Timestamp(1574796770, 245),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.520-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796770, 245),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] "clusterTime" : Timestamp(1574796770, 245),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796770, 245)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.521-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796770, 869)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796770, 245)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796770, 245)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796770, 245)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.522-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "operationTime" : Timestamp(1574796770, 1883),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796770, 245). Collection minimum timestamp is Timestamp(1574796770, 1883)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796770, 872),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "ts" : Timestamp(1574796767, 747),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "clusterTime" : Timestamp(1574796770, 1947),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.523-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796770, 1883),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] "clusterTime" : Timestamp(1574796770, 1947),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796770, 1883)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.524-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796770, 2582)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796770, 1883)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796770, 1883)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796770, 1883)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.525-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "operationTime" : Timestamp(1574796770, 4035),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796770, 1883). Collection minimum timestamp is Timestamp(1574796770, 3901)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796770, 3085),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "ts" : Timestamp(1574796767, 747),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "clusterTime" : Timestamp(1574796770, 4163),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.526-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796770, 4035),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] "clusterTime" : Timestamp(1574796770, 4163),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796770, 4035)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.527-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796770, 5608)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796770, 4035)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796770, 4035)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796770, 4035)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] "operationTime" : Timestamp(1574796773, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.528-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796770, 4035). Collection minimum timestamp is Timestamp(1574796770, 6359)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796770, 6111),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "ts" : Timestamp(1574796767, 747),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "clusterTime" : Timestamp(1574796773, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.529-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796773, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] "clusterTime" : Timestamp(1574796773, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796773, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796773, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.530-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796773, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796773, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796773, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "operationTime" : Timestamp(1574796773, 4048),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796773, 2). Collection minimum timestamp is Timestamp(1574796773, 3539)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.531-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796773, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "ts" : Timestamp(1574796767, 747),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "clusterTime" : Timestamp(1574796773, 4178),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796773, 4048),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "clusterTime" : Timestamp(1574796773, 4178),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.532-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796773, 4048)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796773, 4049)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796773, 4048)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.533-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796773, 4048)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796773, 4048)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "operationTime" : Timestamp(1574796776, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796773, 4048). Collection minimum timestamp is Timestamp(1574796773, 6013)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796773, 6065),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.534-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "ts" : Timestamp(1574796767, 747),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "clusterTime" : Timestamp(1574796776, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796776, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "clusterTime" : Timestamp(1574796776, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.535-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796776, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796776, 3)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796776, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796776, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.536-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796776, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "operationTime" : Timestamp(1574796776, 1533),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796776, 2). Collection minimum timestamp is Timestamp(1574796776, 7)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796776, 3),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "ts" : Timestamp(1574796776, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "clusterTime" : Timestamp(1574796776, 1725),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.537-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796776, 1533),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] "clusterTime" : Timestamp(1574796776, 1725),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796776, 2009)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796776, 2014)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.538-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796776, 1533)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796776, 1533)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796776, 1533)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "operationTime" : Timestamp(1574796776, 2212),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796776, 1533). Collection minimum timestamp is Timestamp(1574796776, 2016)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.539-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796776, 2014),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "ts" : Timestamp(1574796776, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "clusterTime" : Timestamp(1574796776, 2276),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796776, 2212),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "clusterTime" : Timestamp(1574796776, 2276),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.540-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796776, 2520)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796776, 2522)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796776, 2212)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.541-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796776, 2212)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796776, 2212)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "operationTime" : Timestamp(1574796776, 3034),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796776, 2212). Collection minimum timestamp is Timestamp(1574796776, 3033)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796776, 2522),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.542-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "ts" : Timestamp(1574796776, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "clusterTime" : Timestamp(1574796776, 3034),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796776, 3034),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "clusterTime" : Timestamp(1574796776, 3034),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.543-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796776, 3034)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796776, 3034)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796776, 3034)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796776, 3034)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.544-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796776, 3034)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "operationTime" : Timestamp(1574796776, 5046),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796776, 3034). Collection minimum timestamp is Timestamp(1574796776, 5044)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796776, 3034),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "ts" : Timestamp(1574796776, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "clusterTime" : Timestamp(1574796776, 5046),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.545-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796776, 5046),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] "clusterTime" : Timestamp(1574796776, 5046),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796776, 5046)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796776, 5050)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.546-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796776, 5046)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796776, 5046)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796776, 5046)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] "operationTime" : Timestamp(1574796779, 3),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:37.547-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796776, 5046). Collection minimum timestamp is Timestamp(1574796777, 695)",
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:37.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.249-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.249-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.249-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.249-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.249-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.249-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.249-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.249-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796777, 879),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] "ts" : Timestamp(1574796777, 886),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] "clusterTime" : Timestamp(1574796779, 3),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796779, 3),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] "clusterTime" : Timestamp(1574796779, 3),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.250-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796779, 3)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796779, 4)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796779, 3)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.251-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796779, 3)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796779, 3)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "operationTime" : Timestamp(1574796779, 1085),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796779, 3). Collection minimum timestamp is Timestamp(1574796779, 1085)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.252-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796779, 4),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] "ts" : Timestamp(1574796777, 886),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] "clusterTime" : Timestamp(1574796779, 1201),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796779, 1085),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] "clusterTime" : Timestamp(1574796779, 1201),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.253-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796779, 1085)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] 	"id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] 	},
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.254-0500 [jsTest] 	"majorityReadOpTime" : Timestamp(1574796779, 1332)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.255-0500 [jsTest] 	},
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.255-0500 [jsTest] 	{
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.255-0500 [jsTest] 	"node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.255-0500 [jsTest] 	"session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.255-0500 [jsTest] 	"id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.255-0500 [jsTest] 	},
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.255-0500 [jsTest] 	"readAtClusterTime" : Timestamp(1574796779, 1085)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.255-0500 [jsTest] 	},
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.255-0500 [jsTest] 	{
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.255-0500 [jsTest] 	"node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.255-0500 [jsTest] 	"session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.255-0500 [jsTest] 	"id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.256-0500 [jsTest] 	},
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.256-0500 [jsTest] 	"readAtClusterTime" : Timestamp(1574796779, 1085)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.256-0500 [jsTest] 	},
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.256-0500 [jsTest] 	{
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.256-0500 [jsTest] 	"node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.256-0500 [jsTest] 	"session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.256-0500 [jsTest] 	"id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:37.974-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:37.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:38.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:38.011-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:38.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:38.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:38.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:38.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:38.507-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:38.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:38.587-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:39.168-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.256-0500 [jsTest] },
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:38.511-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.257-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796779, 1085)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:39.948-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] dropping unhealthy pooled connection to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.257-0500 [jsTest] },
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:38.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.257-0500 [jsTest] {
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:38.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.257-0500 [jsTest] "transientError" : Error: command failed: {
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:39.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.257-0500 [jsTest] "operationTime" : Timestamp(1574796779, 2522),
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:39.011-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.257-0500 [jsTest] "ok" : 0,
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:39.948-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.257-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796779, 1085). Collection minimum timestamp is Timestamp(1574796779, 2520)",
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:39.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.258-0500 [jsTest] "code" : 246,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:39.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.258-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:39.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.258-0500 [jsTest] "$gleStats" : {
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:39.511-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.258-0500 [jsTest] "lastOpTime" : {
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:39.948-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Marking host localhost:20001 as failed :: caused by :: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.258-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:39.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.258-0500 [jsTest] "t" : NumberLong(1)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:39.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.258-0500 [jsTest] },
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:40.007-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.259-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:39.511-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.259-0500 [jsTest] },
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:39.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.259-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796779, 1333),
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:39.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.259-0500 [jsTest] "$configServerState" : {
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:39.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.259-0500 [jsTest] "opTime" : {
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:40.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.259-0500 [jsTest] "ts" : Timestamp(1574796777, 886),
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:40.011-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.259-0500 [jsTest] "t" : NumberLong(1)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:40.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.259-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.259-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.259-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] "clusterTime" : Timestamp(1574796779, 2522),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796779, 2522),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] "clusterTime" : Timestamp(1574796779, 2522),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796779, 2522)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.260-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796779, 2523)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796779, 2522)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796779, 2522)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.261-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796779, 2522)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "operationTime" : Timestamp(1574796780, 439),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796779, 2522). Collection minimum timestamp is Timestamp(1574796780, 1)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796779, 2524),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "ts" : Timestamp(1574796777, 886),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.262-0500 [jsTest] "clusterTime" : Timestamp(1574796780, 439),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796780, 439),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] "clusterTime" : Timestamp(1574796780, 439),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796780, 439)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.263-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796780, 441)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796780, 439)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796780, 439)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.264-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796780, 439)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "operationTime" : Timestamp(1574796780, 1399),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796780, 439). Collection minimum timestamp is Timestamp(1574796780, 1398)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796780, 441),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "ts" : Timestamp(1574796777, 886),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "clusterTime" : Timestamp(1574796780, 1527),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.265-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796780, 1399),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] "clusterTime" : Timestamp(1574796780, 1527),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796780, 1399)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.266-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796780, 1899)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796780, 1399)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796780, 1399)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796780, 1399)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.267-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "operationTime" : Timestamp(1574796782, 4),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796780, 1399). Collection minimum timestamp is Timestamp(1574796782, 3)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796780, 1899),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "ts" : Timestamp(1574796777, 886),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "clusterTime" : Timestamp(1574796782, 56),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.268-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796782, 4),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] "clusterTime" : Timestamp(1574796782, 56),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796782, 4)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.269-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796782, 57)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796782, 4)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796782, 4)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796782, 4)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] "operationTime" : Timestamp(1574796782, 127),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.270-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796782, 4). Collection minimum timestamp is Timestamp(1574796782, 127)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796782, 57),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "ts" : Timestamp(1574796777, 886),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "clusterTime" : Timestamp(1574796782, 127),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.271-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796782, 127),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] "clusterTime" : Timestamp(1574796782, 127),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796782, 127)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796782, 628)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.272-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796782, 127)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796782, 127)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796782, 127)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "operationTime" : Timestamp(1574796782, 1844),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796782, 127). Collection minimum timestamp is Timestamp(1574796782, 1844)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.273-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796782, 823),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "ts" : Timestamp(1574796777, 886),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "clusterTime" : Timestamp(1574796782, 1908),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.274-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796782, 1844),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] "clusterTime" : Timestamp(1574796782, 1908),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796782, 1844)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796782, 2076)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.275-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796782, 1844)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796782, 1844)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796782, 1844)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "operationTime" : Timestamp(1574796782, 3090),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796782, 1844). Collection minimum timestamp is Timestamp(1574796782, 2585)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.276-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796782, 2080),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "ts" : Timestamp(1574796777, 886),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "clusterTime" : Timestamp(1574796782, 3154),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796782, 3090),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.277-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] "clusterTime" : Timestamp(1574796782, 3154),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796782, 3090)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796782, 3154)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.278-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796782, 3090)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796782, 3090)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796782, 3090)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "operationTime" : Timestamp(1574796785, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796782, 3090). Collection minimum timestamp is Timestamp(1574796785, 1)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.279-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796782, 3154),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "ts" : Timestamp(1574796777, 886),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "clusterTime" : Timestamp(1574796785, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796785, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "clusterTime" : Timestamp(1574796785, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.280-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796785, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796785, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796785, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.281-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796785, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796785, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "operationTime" : Timestamp(1574796785, 1516),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796785, 1). Collection minimum timestamp is Timestamp(1574796785, 1515)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.282-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796785, 4),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "ts" : Timestamp(1574796785, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "clusterTime" : Timestamp(1574796785, 1580),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796785, 1516),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "clusterTime" : Timestamp(1574796785, 1580),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.283-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796785, 1516)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796785, 1582)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796785, 1516)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.284-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796785, 1516)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796785, 1516)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "operationTime" : Timestamp(1574796786, 377),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796785, 1516). Collection minimum timestamp is Timestamp(1574796786, 377)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796785, 1582),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "ts" : Timestamp(1574796785, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.285-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] "clusterTime" : Timestamp(1574796786, 441),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796786, 377),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] "clusterTime" : Timestamp(1574796786, 441),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796786, 377)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.286-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796786, 442)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796786, 377)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796786, 377)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796786, 377)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.287-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "operationTime" : Timestamp(1574796786, 2279),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796786, 377). Collection minimum timestamp is Timestamp(1574796786, 1390)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796786, 442),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "ts" : Timestamp(1574796785, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "clusterTime" : Timestamp(1574796786, 2407),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.288-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796786, 2279),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "clusterTime" : Timestamp(1574796786, 2407),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796786, 2459)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796786, 2526)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.289-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796786, 2279)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796786, 2279)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796786, 2279)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "operationTime" : Timestamp(1574796786, 4802),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796786, 2279). Collection minimum timestamp is Timestamp(1574796786, 4802)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.290-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796786, 2528),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "ts" : Timestamp(1574796785, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "clusterTime" : Timestamp(1574796786, 4802),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796786, 4802),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "clusterTime" : Timestamp(1574796786, 4802),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.291-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796786, 4802)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796786, 4857)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796786, 4802)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796786, 4802)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.292-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796786, 4802)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "operationTime" : Timestamp(1574796789, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796786, 4802). Collection minimum timestamp is Timestamp(1574796786, 4856)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796786, 4922),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "ts" : Timestamp(1574796785, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.293-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] "clusterTime" : Timestamp(1574796789, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796789, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] "clusterTime" : Timestamp(1574796789, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796789, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.294-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796789, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796789, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796789, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796789, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.295-0500 [jsTest] "operationTime" : Timestamp(1574796789, 1575),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796789, 1). Collection minimum timestamp is Timestamp(1574796789, 1575)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796789, 133),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "ts" : Timestamp(1574796787, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "clusterTime" : Timestamp(1574796789, 1575),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.296-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796789, 1575),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "clusterTime" : Timestamp(1574796789, 1575),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796789, 1575)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796789, 1641)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.297-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796789, 1575)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796789, 1575)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796789, 1575)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "operationTime" : Timestamp(1574796789, 2526),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796789, 1575). Collection minimum timestamp is Timestamp(1574796789, 2526)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.298-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796789, 2016),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "ts" : Timestamp(1574796787, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "clusterTime" : Timestamp(1574796789, 2526),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796789, 2526),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "clusterTime" : Timestamp(1574796789, 2526),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.299-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796789, 2526)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796789, 2527)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:40.300-0500 [jsTest] },
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:40.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.004-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796789, 2526)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.004-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.004-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.004-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.004-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796789, 2526)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796789, 2526)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "operationTime" : Timestamp(1574796789, 4041),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796789, 2526). Collection minimum timestamp is Timestamp(1574796789, 4041)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.005-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796789, 2527),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] "ts" : Timestamp(1574796787, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] "clusterTime" : Timestamp(1574796789, 4106),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796789, 4041),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] "clusterTime" : Timestamp(1574796789, 4106),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.006-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796789, 4041)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796789, 4546)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796789, 4041)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.007-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796789, 4041)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796789, 4041)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] "operationTime" : Timestamp(1574796792, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796789, 4041). Collection minimum timestamp is Timestamp(1574796789, 5117)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] "code" : 246,
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:40.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:40.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:40.511-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:40.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:40.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:40.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:41.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:41.011-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:41.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:41.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:41.168-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:41.168-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:41.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:41.474-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:41.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:41.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:41.511-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:41.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:41.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:42.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:42.011-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:42.011-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:42.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:42.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:42.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:42.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:42.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:42.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:42.512-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:42.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:42.974-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:42.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:43.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:43.012-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:43.012-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.008-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.009-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.009-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.009-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.009-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.009-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.009-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.010-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.010-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796789, 5114),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.010-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.010-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.010-0500 [jsTest] "ts" : Timestamp(1574796787, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.010-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.010-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.011-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.011-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.011-0500 [jsTest] "clusterTime" : Timestamp(1574796792, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.011-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.011-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.011-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.011-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.012-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.012-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.012-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.012-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.012-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.012-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796792, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.012-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.012-0500 [jsTest] "clusterTime" : Timestamp(1574796792, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.013-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.013-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.013-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.013-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.013-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.013-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.013-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796792, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796792, 4)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796792, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.014-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796792, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796792, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "operationTime" : Timestamp(1574796792, 1965),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796792, 2). Collection minimum timestamp is Timestamp(1574796792, 1964)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.015-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796792, 325),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "ts" : Timestamp(1574796787, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "clusterTime" : Timestamp(1574796792, 2017),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796792, 1965),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "clusterTime" : Timestamp(1574796792, 2017),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.016-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796792, 1965)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796792, 2017)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796792, 1965)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.017-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796792, 1965)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796792, 1965)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "operationTime" : Timestamp(1574796792, 3030),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796792, 1965). Collection minimum timestamp is Timestamp(1574796792, 2592)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.018-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796792, 2017),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "ts" : Timestamp(1574796787, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "clusterTime" : Timestamp(1574796792, 3094),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796792, 3030),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "clusterTime" : Timestamp(1574796792, 3094),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.019-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796792, 3030)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796792, 3096)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796792, 3030)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.020-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796792, 3030)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796792, 3030)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "operationTime" : Timestamp(1574796792, 5256),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796792, 3030). Collection minimum timestamp is Timestamp(1574796792, 3541)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.021-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796792, 3095),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "ts" : Timestamp(1574796787, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "clusterTime" : Timestamp(1574796792, 5448),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796792, 5256),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "clusterTime" : Timestamp(1574796792, 5448),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.022-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796792, 6000)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796792, 6058)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796792, 5256)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.023-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796792, 5256)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796792, 5256)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "operationTime" : Timestamp(1574796795, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796792, 5256). Collection minimum timestamp is Timestamp(1574796792, 6064)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.024-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796792, 6059),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "ts" : Timestamp(1574796787, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "clusterTime" : Timestamp(1574796795, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796795, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "clusterTime" : Timestamp(1574796795, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.025-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796795, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796795, 506)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796795, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.026-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796795, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796795, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "operationTime" : Timestamp(1574796795, 1274),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796795, 1). Collection minimum timestamp is Timestamp(1574796795, 1013)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.027-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796795, 504),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] "ts" : Timestamp(1574796787, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] "clusterTime" : Timestamp(1574796795, 1402),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796795, 1274),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] "clusterTime" : Timestamp(1574796795, 1402),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.028-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796795, 2018)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796795, 2023)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796795, 1274)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.029-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796795, 1274)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796795, 1274)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "operationTime" : Timestamp(1574796795, 3871),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796795, 1274). Collection minimum timestamp is Timestamp(1574796795, 3871)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.030-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796795, 2024),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] "ts" : Timestamp(1574796787, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] "clusterTime" : Timestamp(1574796795, 3935),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796795, 3871),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] "clusterTime" : Timestamp(1574796795, 3935),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.031-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796795, 3871)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796795, 4039)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796795, 3871)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.032-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796795, 3871)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796795, 3871)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "operationTime" : Timestamp(1574796795, 5625),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796795, 3871). Collection minimum timestamp is Timestamp(1574796795, 5625)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796795, 4039),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.033-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] "ts" : Timestamp(1574796787, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] "clusterTime" : Timestamp(1574796795, 5689),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796795, 5625),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] "clusterTime" : Timestamp(1574796795, 5689),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.034-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796795, 5625)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796795, 6564)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796795, 5625)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.035-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796795, 5625)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796795, 5625)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "operationTime" : Timestamp(1574796798, 6),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796795, 5625). Collection minimum timestamp is Timestamp(1574796798, 6)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796798, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.036-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] "ts" : Timestamp(1574796795, 5559),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] "clusterTime" : Timestamp(1574796798, 6),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796798, 6),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] "clusterTime" : Timestamp(1574796798, 6),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.037-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796798, 6)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796798, 7)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796798, 6)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.038-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796798, 6)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796798, 6)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "operationTime" : Timestamp(1574796798, 1519),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796798, 6). Collection minimum timestamp is Timestamp(1574796798, 1519)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796798, 10),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.039-0500 [jsTest] "ts" : Timestamp(1574796797, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] "clusterTime" : Timestamp(1574796798, 1520),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796798, 1519),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] "clusterTime" : Timestamp(1574796798, 1520),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.040-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796798, 1519)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796798, 1520)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796798, 1519)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796798, 1519)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.041-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796798, 1519)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "operationTime" : Timestamp(1574796798, 2029),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796798, 1519). Collection minimum timestamp is Timestamp(1574796798, 2023)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796798, 1520),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "ts" : Timestamp(1574796797, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "clusterTime" : Timestamp(1574796798, 2157),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.042-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796798, 2029),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] "clusterTime" : Timestamp(1574796798, 2157),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796798, 2029)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.043-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796798, 2477)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796798, 2029)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796798, 2029)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796798, 2029)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "operationTime" : Timestamp(1574796798, 4055),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796798, 2029). Collection minimum timestamp is Timestamp(1574796798, 3041)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.044-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796798, 3032),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "ts" : Timestamp(1574796797, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "clusterTime" : Timestamp(1574796798, 4171),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796798, 4055),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.045-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] "clusterTime" : Timestamp(1574796798, 4171),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796798, 4055)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796798, 4239)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796798, 4055)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.046-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796798, 4055)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796798, 4055)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "operationTime" : Timestamp(1574796798, 5559),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796798, 4055). Collection minimum timestamp is Timestamp(1574796798, 5559)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.047-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796798, 4304),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "ts" : Timestamp(1574796797, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "clusterTime" : Timestamp(1574796798, 5559),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796798, 5559),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "clusterTime" : Timestamp(1574796798, 5559),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.048-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796798, 5559)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796798, 5560)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796798, 5559)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796798, 5559)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.049-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796798, 5559)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "operationTime" : Timestamp(1574796801, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796798, 5559). Collection minimum timestamp is Timestamp(1574796798, 5627)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796798, 5560),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "ts" : Timestamp(1574796797, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.050-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] "clusterTime" : Timestamp(1574796801, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796801, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] "clusterTime" : Timestamp(1574796801, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796801, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.051-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796801, 2)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796801, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796801, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796801, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.052-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "operationTime" : Timestamp(1574796801, 576),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796801, 1). Collection minimum timestamp is Timestamp(1574796801, 509)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796801, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "ts" : Timestamp(1574796797, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "clusterTime" : Timestamp(1574796801, 576),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.053-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796801, 576),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] "clusterTime" : Timestamp(1574796801, 576),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796801, 576)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796801, 1013)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.054-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796801, 576)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796801, 576)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796801, 576)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "operationTime" : Timestamp(1574796801, 2592),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796801, 576). Collection minimum timestamp is Timestamp(1574796801, 2021)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.055-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.056-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.056-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.056-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.056-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.056-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796801, 1017),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:43.056-0500 [jsTest] "$configServerState" : {
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:43.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.598-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] "ts" : Timestamp(1574796797, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] "clusterTime" : Timestamp(1574796801, 2656),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796801, 2592),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] "clusterTime" : Timestamp(1574796801, 2656),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.599-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796801, 2592)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796801, 2784)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796801, 2592)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.600-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796801, 2592)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796801, 2592)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "operationTime" : Timestamp(1574796801, 4044),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796801, 2592). Collection minimum timestamp is Timestamp(1574796801, 3537)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.601-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796801, 3287),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] "ts" : Timestamp(1574796797, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] "clusterTime" : Timestamp(1574796801, 4044),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796801, 4044),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] "clusterTime" : Timestamp(1574796801, 4044),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.602-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796801, 4044)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796801, 4110)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.603-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796801, 4044)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796801, 4044)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796801, 4044)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] "operationTime" : Timestamp(1574796801, 6063),
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:43.168-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.604-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796801, 4044). Collection minimum timestamp is Timestamp(1574796801, 5623)",
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:43.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.605-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.605-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.605-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.605-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.605-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.605-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.605-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.605-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.605-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.605-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796801, 4046),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.605-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.605-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.605-0500 [jsTest] "ts" : Timestamp(1574796797, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.605-0500 [jsTest] "t" : NumberLong(1)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:43.507-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:43.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:43.512-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:43.587-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:43.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:43.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:43.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:44.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:44.012-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:44.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:44.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:44.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:44.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:44.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:44.512-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:44.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:44.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:44.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:45.007-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:45.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:45.012-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:45.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:45.168-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:45.168-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:45.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:45.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:45.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:45.512-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:45.512-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:45.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.605-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.606-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.606-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.606-0500 [jsTest] "clusterTime" : Timestamp(1574796801, 6064),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.606-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.606-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.606-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.606-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.606-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.607-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.607-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.607-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.607-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.607-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796801, 6063),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.607-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.607-0500 [jsTest] "clusterTime" : Timestamp(1574796801, 6064),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.608-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.608-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.608-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.608-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.608-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.608-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.608-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.608-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.608-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.609-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.609-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.609-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796801, 6063)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.609-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.609-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.609-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.609-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.609-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796801, 6567)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796801, 6063)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796801, 6063)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.610-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796801, 6063)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "operationTime" : Timestamp(1574796804, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796801, 6063). Collection minimum timestamp is Timestamp(1574796801, 7139)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796801, 7138),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "ts" : Timestamp(1574796797, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "clusterTime" : Timestamp(1574796804, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.611-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796804, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] "clusterTime" : Timestamp(1574796804, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796804, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.612-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796804, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796804, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796804, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.613-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796804, 1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "operationTime" : Timestamp(1574796804, 1579),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796804, 1). Collection minimum timestamp is Timestamp(1574796804, 1398)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796804, 133),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "ts" : Timestamp(1574796797, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "clusterTime" : Timestamp(1574796804, 1707),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.614-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796804, 1579),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] "clusterTime" : Timestamp(1574796804, 1707),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796804, 2143)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.615-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796804, 2338)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796804, 1579)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796804, 1579)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796804, 1579)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.616-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "operationTime" : Timestamp(1574796804, 3416),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796804, 1579). Collection minimum timestamp is Timestamp(1574796804, 3030)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796804, 2338),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "ts" : Timestamp(1574796797, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "clusterTime" : Timestamp(1574796804, 3416),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.617-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796804, 3416),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] "clusterTime" : Timestamp(1574796804, 3416),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796804, 3416)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.618-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796804, 3532)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796804, 3416)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796804, 3416)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796804, 3416)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.619-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "operationTime" : Timestamp(1574796804, 5498),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796804, 3416). Collection minimum timestamp is Timestamp(1574796804, 5498)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796804, 3533),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "ts" : Timestamp(1574796797, 1),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "clusterTime" : Timestamp(1574796804, 5551),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.620-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796804, 5498),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] "clusterTime" : Timestamp(1574796804, 5551),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796804, 5498)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.621-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796804, 5556)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796804, 5498)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796804, 5498)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796804, 5498)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.622-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "operationTime" : Timestamp(1574796804, 6122),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796804, 5498). Collection minimum timestamp is Timestamp(1574796804, 5621)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796804, 5621),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "ts" : Timestamp(1574796807, 2),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "clusterTime" : Timestamp(1574796807, 68),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.623-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796804, 6122),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] "clusterTime" : Timestamp(1574796807, 68),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796804, 6122)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.624-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796807, 442)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796804, 6122)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796804, 6122)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796804, 6122)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] "transientError" : Error: command failed: {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] "operationTime" : Timestamp(1574796807, 448),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.625-0500 [jsTest] "ok" : 0,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "errmsg" : "Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1574796804, 6122). Collection minimum timestamp is Timestamp(1574796807, 448)",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "code" : 246,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "codeName" : "SnapshotUnavailable",
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "$gleStats" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "lastOpTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "ts" : Timestamp(1574796766, 3543),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "electionId" : ObjectId("7fffffff0000000000000001")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "lastCommittedOpTime" : Timestamp(1574796807, 443),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "$configServerState" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "opTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "ts" : Timestamp(1574796807, 4),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "t" : NumberLong(1)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "$clusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "clusterTime" : Timestamp(1574796807, 448),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.626-0500 [jsTest] "performNoopWrite" : false
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] "waitForSecondaries" : Timestamp(1574796807, 448),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] "signedClusterTime" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] "clusterTime" : Timestamp(1574796807, 448),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] "signature" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] "keyId" : NumberLong(0)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796807, 448)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] "majorityReadOpTime" : Timestamp(1574796807, 448)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.627-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] "node" : connection to localhost:20001,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] "id" : UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796807, 448)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] "node" : connection to localhost:20002,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] "id" : UUID("db13105a-0202-4d4c-9109-23747867bb60")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796807, 448)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] "node" : connection to localhost:20003,
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] "session" : {
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] "id" : UUID("0c505de9-8c55-40c2-843c-01b5f53198cb")
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] },
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] "readAtClusterTime" : Timestamp(1574796807, 448)
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] }
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] ]
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 [jsTest] ----
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.628-0500 2019-11-26T14:33:30.914-0500 I NETWORK [thread2] trying reconnect to localhost:20001 failed
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 2019-11-26T14:33:30.914-0500 I NETWORK [thread2] reconnect localhost:20001 failed failed
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 2019-11-26T14:33:30.914-0500 I QUERY [thread2] Failed to end session { id: UUID("1f528513-5191-448e-bdc5-00eb90006431") } due to SocketException: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 2019-11-26T14:33:30.917-0500 I NETWORK [thread2] trying reconnect to localhost:20001 failed
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 2019-11-26T14:33:30.917-0500 I NETWORK [thread2] reconnect localhost:20001 failed failed
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 2019-11-26T14:33:30.917-0500 I QUERY [thread2] Failed to end session { id: UUID("1bf68b0e-50f3-4a70-a84b-9d874fe60ef3") } due to SocketException: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 2019-11-26T14:33:30.920-0500 E QUERY [js] Error: Error: error doing query: failed: network error while attempting to run command 'dbHash' on host 'localhost:20001' :
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 DB.prototype.runCommand@src/mongo/shell/db.js:169:19
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 ReplSetTest/this.getHashesUsingSessions/<@src/mongo/shell/replsettest.js:1708:46
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 ReplSetTest/this.getHashesUsingSessions@src/mongo/shell/replsettest.js:1701:16
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 checkCollectionHashesForDB@eval:160:13
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 checkReplDbhashBackgroundThread@eval:304:26
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 _threadStartWrapper@:26:16
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 returnData<@jstests/hooks/run_check_repl_dbhash_background.js:487:17
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 @jstests/hooks/run_check_repl_dbhash_background.js:485:28
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 @jstests/hooks/run_check_repl_dbhash_background.js:20:2
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 2019-11-26T14:33:30.920-0500 F - [main] failed to load: jstests/hooks/run_check_repl_dbhash_background.js
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 2019-11-26T14:33:30.920-0500 E - [main] exiting with code -3
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 2019-11-26T14:33:30.922-0500 I NETWORK [js] DBClientConnection failed to receive message from localhost:20001 - HostUnreachable: Connection closed by peer
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.629-0500 2019-11-26T14:33:30.922-0500 I QUERY [js] Failed to end session { id: UUID("e8cee555-35a7-448e-8702-72b60bd0ca56") } due to HostUnreachable: network error while attempting to run command 'endSessions' on host 'localhost:20001'
[CheckReplDBHashInBackground:job0:agg_out:CheckReplDBHashInBackground] 2019-11-26T14:33:45.630-0500 Check dbhashes of all replica set members while a test is running before running 'agg_out' failed
[executor:fsm_workload_test:job0] 2019-11-26T14:33:45.632-0500 agg_out:CheckReplDBHashInBackground ran in 85.19 seconds: failed.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:45.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:46.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:46.012-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:46.075-0500 I CONNPOOL [TaskExecutorPool-0] Ending idle connection to host localhost:20004 because the pool meets constraints; 1 connections to that host remain open
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:46.075-0500 I NETWORK [conn80] end connection 127.0.0.1:46136 (33 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:46.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:46.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:46.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:46.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:46.474-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:46.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:46.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:46.512-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:46.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:46.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:46.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:47.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:47.013-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:47.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:47.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:47.168-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:47.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:47.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:47.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:47.513-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:47.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:47.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:47.974-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:47.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:48.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:48.013-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:48.013-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:48.087-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:48.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:48.507-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:48.507-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:48.508-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:48.508-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20004
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:48.508-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20004
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:48.508-0500 I COMMAND [conn58] command test5_fsmdb0.fsmcoll0 appName: "tid:3" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("d95d0fed-24c4-45ab-ba3c-e11cdba255bd") }, $clusterTime: { clusterTime: Timestamp(1574796807, 443), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"Could not find host matching read preference { mode: \"primary\" } for set shard-rs0" errName:FailedToSatisfyReadPreference errCode:133 reslen:312 protocol:op_msg 20895ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:48.508-0500 I NETWORK [listener] connection accepted from 127.0.0.1:48636 #224 (34 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:48.508-0500 I COMMAND [conn200] command test5_fsmdb0.fsmcoll0 appName: "tid:0" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("550ce863-3b32-4a30-a03e-bbaf3bb341c5") }, $clusterTime: { clusterTime: Timestamp(1574796804, 5620), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"Could not find host matching read preference { mode: \"primary\" } for set shard-rs0" errName:FailedToSatisfyReadPreference errCode:133 reslen:312 protocol:op_msg 23644ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:48.508-0500 I COMMAND [conn59] command test5_fsmdb0.fsmcoll0 appName: "tid:1" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("32ab370b-4d54-43da-a69a-95db7fd1164f") }, $clusterTime: { clusterTime: Timestamp(1574796804, 5551), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"Could not find host matching read preference { mode: \"primary\" } for set shard-rs0" errName:FailedToSatisfyReadPreference errCode:133 reslen:312 protocol:op_msg 23648ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:48.508-0500 I NETWORK [listener] connection accepted from 127.0.0.1:48642 #225 (35 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:48.508-0500 I COMMAND [conn204] command test5_fsmdb0.fsmcoll0 appName: "tid:4" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("f0c4c6e1-a41a-4a84-9cc1-36dbb149b196") }, $clusterTime: { clusterTime: Timestamp(1574796804, 5620), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"Could not find host matching read preference { mode: \"primary\" } for set shard-rs0" errName:FailedToSatisfyReadPreference errCode:133 reslen:312 protocol:op_msg 23644ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:48.508-0500 I CONNPOOL [TaskExecutorPool-0] Dropping all pooled connections to localhost:20001 due to HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:48.508-0500 I NETWORK [conn224] received client metadata from 127.0.0.1:48636 conn224: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:48.508-0500 I COMMAND [conn201] command test5_fsmdb0.fsmcoll0 appName: "tid:2" command: aggregate { aggregate: "fsmcoll0", pipeline: [ { $match: { flag: true } }, { $out: "agg_out" } ], cursor: {}, lsid: { id: UUID("b7cda8f1-92e4-428e-8c55-54f715ce74c4") }, $clusterTime: { clusterTime: Timestamp(1574796807, 443), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test5_fsmdb0" } nShards:2 numYields:0 ok:0 errMsg:"Could not find host matching read preference { mode: \"primary\" } for set shard-rs0" errName:FailedToSatisfyReadPreference errCode:133 reslen:312 protocol:op_msg 20896ms
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:48.509-0500 I NETWORK [conn225] received client metadata from 127.0.0.1:48642 conn225: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:48.508-0500 I CONNPOOL [TaskExecutorPool-0] Dropping all pooled connections to localhost:20001 due to HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:48.513-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:48.514-0500 I NETWORK [conn204] end connection 127.0.0.1:46076 (4 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:48.514-0500 I NETWORK [conn200] end connection 127.0.0.1:46066 (5 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:48.514-0500 I NETWORK [conn201] end connection 127.0.0.1:46068 (3 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:48.515-0500 I NETWORK [conn59] end connection 127.0.0.1:59220 (2 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:48.515-0500 I NETWORK [conn58] end connection 127.0.0.1:59212 (1 connection now open)
[fsm_workload_test:agg_out] 2019-11-26T14:33:48.518-0500
[fsm_workload_test:agg_out] 2019-11-26T14:33:48.519-0500
[fsm_workload_test:agg_out] 2019-11-26T14:33:48.519-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:33:48.519-0500 [jsTest] New session started with sessionID: { "id" : UUID("d72cfcaa-4848-470a-b89d-d1b112aaa88d") } and options: { "causalConsistency" : false }
[fsm_workload_test:agg_out] 2019-11-26T14:33:48.519-0500 [jsTest] ----
[fsm_workload_test:agg_out] 2019-11-26T14:33:48.519-0500
[fsm_workload_test:agg_out] 2019-11-26T14:33:48.519-0500
[fsm_workload_test:agg_out] 2019-11-26T14:33:48.519-0500 2019-11-26T14:33:48.519-0500 I NETWORK [js] DBClientConnection failed to receive message from localhost:20001 - HostUnreachable: Connection closed by peer
[fsm_workload_test:agg_out] 2019-11-26T14:33:48.520-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: error doing query: failed: network error while attempting to run command 'ismaster' on host 'localhost:20001'
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:48.587-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:33:48.723-0500 2019-11-26T14:33:48.723-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:48.723-0500 2019-11-26T14:33:48.723-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:48.724-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:33:48.928-0500 2019-11-26T14:33:48.928-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:48.928-0500 2019-11-26T14:33:48.928-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:48.928-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:48.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:49.007-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:49.013-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:33:49.134-0500 2019-11-26T14:33:49.134-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:49.134-0500 2019-11-26T14:33:49.134-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:49.135-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:49.168-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:49.168-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:49.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:33:49.345-0500 2019-11-26T14:33:49.345-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:49.345-0500 2019-11-26T14:33:49.345-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:49.345-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:49.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:49.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:49.513-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:33:49.563-0500 2019-11-26T14:33:49.563-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:49.563-0500 2019-11-26T14:33:49.563-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:49.564-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:33:49.798-0500 2019-11-26T14:33:49.798-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:49.798-0500 2019-11-26T14:33:49.798-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:49.798-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:49.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:49.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:50.013-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:33:50.065-0500 2019-11-26T14:33:50.064-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:50.065-0500 2019-11-26T14:33:50.065-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:50.065-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:50.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:33:50.395-0500 2019-11-26T14:33:50.395-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:50.396-0500 2019-11-26T14:33:50.395-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:50.396-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:50.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:50.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:50.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:50.513-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:50.513-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:33:50.551-0500 2019-11-26T14:33:50.551-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] dropping unhealthy pooled connection to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:33:50.551-0500 2019-11-26T14:33:50.551-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:33:50.551-0500 2019-11-26T14:33:50.551-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Marking host localhost:20001 as failed :: caused by :: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:33:50.551-0500 2019-11-26T14:33:50.551-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:33:50.854-0500 2019-11-26T14:33:50.854-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:50.854-0500 2019-11-26T14:33:50.854-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:50.854-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:50.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:51.013-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:51.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:51.168-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:51.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:51.474-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:51.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:51.513-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:33:51.569-0500 2019-11-26T14:33:51.569-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:51.569-0500 2019-11-26T14:33:51.569-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:51.569-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:51.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:51.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:51.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:52.013-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:52.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:52.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:52.513-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:33:52.550-0500 2019-11-26T14:33:52.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:33:52.771-0500 2019-11-26T14:33:52.771-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:52.772-0500 2019-11-26T14:33:52.772-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:52.772-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:52.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:52.974-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:52.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:53.013-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:53.013-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:53.168-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:53.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:53.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:53.513-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:53.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:33:53.974-0500 2019-11-26T14:33:53.974-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:53.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:33:53.974-0500 2019-11-26T14:33:53.974-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:53.975-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:54.013-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:54.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:54.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:54.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:54.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:54.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:54.513-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:33:54.550-0500 2019-11-26T14:33:54.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:54.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:55.013-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:55.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:55.168-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:33:55.177-0500 2019-11-26T14:33:55.177-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:55.177-0500 2019-11-26T14:33:55.177-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:55.177-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:55.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:55.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:55.513-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:55.513-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:55.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:55.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:56.014-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:56.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:56.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:33:56.380-0500 2019-11-26T14:33:56.379-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:56.380-0500 2019-11-26T14:33:56.380-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:56.380-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:56.474-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:56.474-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:56.514-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:56.514-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:33:56.550-0500 2019-11-26T14:33:56.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:56.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:56.974-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:56.974-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:33:56.974-0500-5ddd7e245cde74b6784bbc0d", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574796836974), what: "balancer.round", ns: "", details: { executionTimeMillis: 20001, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:56.974-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:57.014-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:57.168-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:57.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:57.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] dropping unhealthy pooled connection to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:57.358-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Marking host localhost:20001 as failed :: caused by :: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:57.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:33:57.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] dropping unhealthy pooled connection to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:33:57.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:33:57.392-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Marking host localhost:20001 as failed :: caused by :: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:33:57.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:33:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] dropping unhealthy pooled connection to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:33:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:33:57.395-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Marking host localhost:20001 as failed :: caused by :: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:33:57.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:57.514-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:33:57.582-0500 2019-11-26T14:33:57.582-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:57.583-0500 2019-11-26T14:33:57.582-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:57.583-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:57.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] dropping unhealthy pooled connection to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:57.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:57.620-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Marking host localhost:20001 as failed :: caused by :: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:57.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:57.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:57.697-0500 I REPL [ReplCoordExtern-1] Choosing new sync source. Our current sync source is not primary and does not have a sync source, so we require that it is ahead of us. Current sync source: localhost:20003, my last fetched oplog optime: { ts: Timestamp(1574796807, 448), t: 1 }, latest oplog optime of sync source: { ts: Timestamp(1574796807, 448), t: 1 } (sync source does not know the primary)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:57.697-0500 I REPL [ReplCoordExtern-1] Canceling oplog query due to OplogQueryMetadata. We have to choose a new sync source. Current source: localhost:20003, OpTime { ts: Timestamp(1574796807, 448), t: 1 }, its sync source index:-1
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:57.697-0500 W REPL [BackgroundSync] Fetcher stopped querying remote oplog with error: InvalidSyncSource: sync source localhost:20003 (config version: 2; last applied optime: { ts: Timestamp(1574796807, 448), t: 1 }; sync source index: -1; primary index: -1) is no longer valid
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:57.697-0500 I REPL [BackgroundSync] Clearing sync source localhost:20003 to choose a new one.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:57.697-0500 I REPL [BackgroundSync] could not find member to sync from
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:57.698-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:57.707-0500 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to localhost:20003: InvalidSyncSource: Sync source was cleared. Was localhost:20003
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:57.707-0500 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to localhost:20001: InvalidSyncSource: Sync source was cleared. Was localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:57.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:58.014-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:58.198-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:58.199-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:58.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:58.514-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:33:58.550-0500 2019-11-26T14:33:58.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:58.699-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:33:58.785-0500 2019-11-26T14:33:58.785-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:58.785-0500 2019-11-26T14:33:58.785-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:58.786-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:59.014-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:59.014-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:33:59.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:59.199-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:59.199-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:33:59.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:33:59.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:33:59.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:33:59.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:33:59.514-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:33:59.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:33:59.699-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:33:59.988-0500 2019-11-26T14:33:59.988-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:59.988-0500 2019-11-26T14:33:59.988-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:33:59.988-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:00.014-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:00.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:00.199-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:00.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:00.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:00.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:00.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:00.514-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:00.550-0500 2019-11-26T14:34:00.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:00.699-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:01.014-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:01.191-0500 2019-11-26T14:34:01.191-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:01.191-0500 2019-11-26T14:34:01.191-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:01.191-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:01.199-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:01.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:01.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:01.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:01.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:01.514-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:01.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:01.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:01.699-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:01.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:02.014-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:02.199-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:02.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:34:02.393-0500 2019-11-26T14:34:02.393-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:02.394-0500 2019-11-26T14:34:02.393-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:02.394-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:02.514-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:02.514-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:02.550-0500 2019-11-26T14:34:02.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:02.699-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:02.699-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:03.015-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:03.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:03.199-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:03.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:03.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:03.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:03.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:03.515-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:03.515-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:03.596-0500 2019-11-26T14:34:03.596-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:03.596-0500 2019-11-26T14:34:03.596-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:03.596-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:03.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:03.699-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:03.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:04.015-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:04.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:04.199-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:04.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:04.515-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:04.550-0500 2019-11-26T14:34:04.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:04.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:04.699-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:04.799-0500 2019-11-26T14:34:04.799-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:04.799-0500 2019-11-26T14:34:04.799-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:04.799-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:05.015-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:05.199-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:05.199-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:05.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:05.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:05.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:05.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:05.515-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:05.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:05.699-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:05.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:34:06.002-0500 2019-11-26T14:34:06.001-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:06.002-0500 2019-11-26T14:34:06.002-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:06.002-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:06.015-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:06.015-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:06.199-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:06.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:06.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:06.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:06.515-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:06.550-0500 2019-11-26T14:34:06.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:06.699-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:06.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:07.015-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:07.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:07.199-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:07.204-0500 2019-11-26T14:34:07.204-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:07.204-0500 2019-11-26T14:34:07.204-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:07.204-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:07.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:07.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:07.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:07.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:07.515-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:07.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:07.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:07.699-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:07.977-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:07.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:08.015-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:08.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:08.199-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:08.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:34:08.407-0500 2019-11-26T14:34:08.407-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:08.407-0500 2019-11-26T14:34:08.407-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:08.407-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:08.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:08.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:08.515-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:08.550-0500 2019-11-26T14:34:08.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:08.699-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:08.700-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:08.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:09.015-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:09.200-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:09.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:09.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:09.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:09.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:09.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:09.515-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:09.515-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:09.609-0500 2019-11-26T14:34:09.609-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:09.610-0500 2019-11-26T14:34:09.609-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:09.610-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:09.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:09.700-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:09.700-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:09.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:09.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:09.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:10.015-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:10.200-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:10.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:10.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:10.515-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:10.550-0500 2019-11-26T14:34:10.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:10.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:10.700-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:10.812-0500 2019-11-26T14:34:10.812-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:10.812-0500 2019-11-26T14:34:10.812-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:10.813-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:10.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:10.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:11.015-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:11.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:11.200-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:11.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:11.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:11.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:11.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:11.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:11.515-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:11.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:11.700-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:11.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:12.015-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:34:12.015-0500 2019-11-26T14:34:12.015-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:12.015-0500 2019-11-26T14:34:12.015-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:12.015-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:12.016-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:12.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:12.200-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:12.200-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:12.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:12.477-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:12.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:12.516-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:12.550-0500 2019-11-26T14:34:12.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:12.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:12.701-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:12.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:13.016-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:13.016-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:13.201-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:13.201-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:13.217-0500 2019-11-26T14:34:13.217-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:13.218-0500 2019-11-26T14:34:13.217-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:13.218-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:13.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:13.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:13.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:13.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:13.516-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:13.701-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:13.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:13.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:14.016-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:14.201-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:14.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:14.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:34:14.420-0500 2019-11-26T14:34:14.420-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:14.420-0500 2019-11-26T14:34:14.420-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:14.421-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:14.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:14.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:14.516-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:14.550-0500 2019-11-26T14:34:14.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:14.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:14.701-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:14.977-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:14.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:15.016-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:15.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:15.201-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:15.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:15.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:15.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:15.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:15.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:15.516-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:15.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:15.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:34:15.624-0500 2019-11-26T14:34:15.623-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:15.624-0500 2019-11-26T14:34:15.624-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:15.624-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:15.701-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:15.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:16.017-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:16.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:16.201-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:16.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:16.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:16.517-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:16.517-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:16.550-0500 2019-11-26T14:34:16.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:16.701-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:16.702-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:16.826-0500 2019-11-26T14:34:16.826-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:16.826-0500 2019-11-26T14:34:16.826-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:16.827-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:16.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:17.017-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:17.201-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:17.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:17.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:17.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:17.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:17.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:17.517-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:17.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:17.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:17.701-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:17.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:17.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:17.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:18.017-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:18.029-0500 2019-11-26T14:34:18.029-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:18.029-0500 2019-11-26T14:34:18.029-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:18.029-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:18.201-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:18.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:18.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:18.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:18.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:18.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:18.517-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:18.550-0500 2019-11-26T14:34:18.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:18.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:18.701-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:18.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:19.017-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:19.017-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:19.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:19.201-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:19.202-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:19.231-0500 2019-11-26T14:34:19.231-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:19.232-0500 2019-11-26T14:34:19.232-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:19.232-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:19.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:19.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:19.477-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:19.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:19.517-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:19.702-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:19.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:20.017-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:20.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:20.202-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:20.202-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:20.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:20.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:34:20.434-0500 2019-11-26T14:34:20.434-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:20.434-0500 2019-11-26T14:34:20.434-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:20.435-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:20.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:20.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:20.517-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:20.550-0500 2019-11-26T14:34:20.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:34:20.551-0500 2019-11-26T14:34:20.551-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:20.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:20.702-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:20.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:21.017-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:21.202-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:21.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:21.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:21.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:21.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:21.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:21.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:21.518-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:21.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:34:21.637-0500 2019-11-26T14:34:21.637-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:21.637-0500 2019-11-26T14:34:21.637-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:21.637-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:21.702-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:21.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:21.977-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:21.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:22.018-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:22.202-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:22.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:22.518-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:22.519-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:22.550-0500 2019-11-26T14:34:22.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:22.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:22.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:22.702-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:22.839-0500 2019-11-26T14:34:22.839-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:22.840-0500 2019-11-26T14:34:22.839-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:22.840-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:22.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:22.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:23.020-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:23.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:23.202-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:23.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:23.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:23.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:23.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:23.520-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:23.520-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:23.702-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:23.703-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:23.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:24.020-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:24.042-0500 2019-11-26T14:34:24.042-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:24.042-0500 2019-11-26T14:34:24.042-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:24.042-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:24.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:24.203-0500 I REPL_HB [ReplCoord-6] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:24.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:24.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:24.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:24.520-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:24.550-0500 2019-11-26T14:34:24.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:24.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:24.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:24.703-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:24.703-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:24.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:25.020-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:25.203-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:25.245-0500 2019-11-26T14:34:25.245-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:25.245-0500 2019-11-26T14:34:25.245-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:25.245-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:25.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:25.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:25.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:25.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:25.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:25.521-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:25.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:25.704-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:25.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:25.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:25.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:26.021-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:26.021-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:26.204-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:26.447-0500 2019-11-26T14:34:26.447-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:26.448-0500 2019-11-26T14:34:26.447-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:26.448-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:26.477-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:26.477-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:26.521-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:26.550-0500 2019-11-26T14:34:26.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:26.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:26.704-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:26.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:26.976-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:26.976-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:34:26.976-0500-5ddd7e425cde74b6784bbc45", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574796866976), what: "balancer.round", ns: "", details: { executionTimeMillis: 20000, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:26.977-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:27.021-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:27.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:27.204-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:27.205-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:27.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:27.359-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:27.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:27.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:27.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:27.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:27.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:27.522-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:27.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:34:27.651-0500 2019-11-26T14:34:27.651-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:27.651-0500 2019-11-26T14:34:27.651-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:27.652-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:27.706-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:27.707-0500 I CONNPOOL [ReplCoordExternNetwork] Ending idle connection to host localhost:20003 because the pool meets constraints; 1 connections to that host remain open
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:27.707-0500 I NETWORK [conn24] end connection 127.0.0.1:52220 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:28.023-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:28.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:28.206-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:28.207-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:28.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:28.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:28.524-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:28.550-0500 2019-11-26T14:34:28.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:28.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:28.708-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:28.854-0500 2019-11-26T14:34:28.854-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:28.854-0500 2019-11-26T14:34:28.854-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:28.855-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:29.025-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:29.208-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:29.209-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:29.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:29.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:29.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:29.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:29.525-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:29.526-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:29.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:29.710-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:29.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:30.027-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:30.057-0500 2019-11-26T14:34:30.057-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:30.057-0500 2019-11-26T14:34:30.057-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:30.058-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:30.210-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:30.211-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:30.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:30.527-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:30.527-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:30.550-0500 2019-11-26T14:34:30.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:30.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:30.711-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:30.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:31.027-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:31.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:31.211-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:31.212-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:31.260-0500 2019-11-26T14:34:31.260-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:31.260-0500 2019-11-26T14:34:31.260-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:31.260-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:31.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:31.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:31.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:31.527-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:31.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:31.712-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:32.027-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:32.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:32.212-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:32.212-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:32.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:34:32.463-0500 2019-11-26T14:34:32.463-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:32.463-0500 2019-11-26T14:34:32.463-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:32.463-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:32.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:32.527-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:32.550-0500 2019-11-26T14:34:32.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:32.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:32.712-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:33.027-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:33.027-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:33.212-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:33.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:33.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:33.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:33.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:33.527-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:33.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:34:33.665-0500 2019-11-26T14:34:33.665-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:33.666-0500 2019-11-26T14:34:33.666-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:33.666-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:33.712-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:33.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:34.027-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:34.212-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:34.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:34.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:34.527-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:34.550-0500 2019-11-26T14:34:34.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:34.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:34.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:34.712-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:34.868-0500 2019-11-26T14:34:34.868-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:34.868-0500 2019-11-26T14:34:34.868-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:34.869-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:34.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:35.027-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:35.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:35.212-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:35.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:35.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:35.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:35.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:35.527-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:35.712-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:35.713-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:36.027-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:36.071-0500 2019-11-26T14:34:36.071-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:36.071-0500 2019-11-26T14:34:36.071-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:36.071-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:36.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:36.213-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:36.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:36.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:36.527-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:36.527-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:36.550-0500 2019-11-26T14:34:36.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:36.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:36.713-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:36.713-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:36.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:37.027-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:37.213-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:37.274-0500 2019-11-26T14:34:37.273-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:37.274-0500 2019-11-26T14:34:37.274-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:37.274-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:37.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:37.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:37.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:37.478-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:37.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:37.527-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:37.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:37.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:37.713-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:37.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:37.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:37.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:38.027-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:38.213-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:38.477-0500 2019-11-26T14:34:38.477-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:38.477-0500 2019-11-26T14:34:38.477-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:38.477-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:38.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:38.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:38.527-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:38.550-0500 2019-11-26T14:34:38.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:38.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:38.713-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:38.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:38.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:38.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:39.027-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:39.027-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:39.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:39.213-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:39.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:39.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:39.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:39.527-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:39.679-0500 2019-11-26T14:34:39.679-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:39.680-0500 2019-11-26T14:34:39.679-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:39.680-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:39.713-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:39.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:39.978-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:39.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:40.028-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:40.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:40.213-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:40.213-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:40.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:40.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:40.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:40.527-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:40.550-0500 2019-11-26T14:34:40.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:40.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:40.713-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:40.882-0500 2019-11-26T14:34:40.882-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:40.882-0500 2019-11-26T14:34:40.882-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:40.883-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:40.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:41.028-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:41.213-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:41.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:41.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:41.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:41.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:41.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:41.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:41.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:41.528-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:41.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:41.713-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:41.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:41.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:42.028-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:42.085-0500 2019-11-26T14:34:42.085-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:42.085-0500 2019-11-26T14:34:42.085-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:42.086-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:42.213-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:42.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:42.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:42.528-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:42.528-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:42.550-0500 2019-11-26T14:34:42.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:42.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:42.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:42.713-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:42.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:42.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:43.028-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:43.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:43.214-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:43.288-0500 2019-11-26T14:34:43.288-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:43.288-0500 2019-11-26T14:34:43.288-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:43.288-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:43.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:43.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:43.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:43.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:43.528-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:43.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:43.714-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:43.714-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:43.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:44.028-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:44.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:44.214-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:44.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:44.478-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:44.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:34:44.491-0500 2019-11-26T14:34:44.491-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:44.491-0500 2019-11-26T14:34:44.491-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:44.491-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:44.529-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:44.550-0500 2019-11-26T14:34:44.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:44.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:44.714-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:44.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:44.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:45.029-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:45.029-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:45.214-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:45.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:45.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:45.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:45.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:45.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:45.529-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:45.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:34:45.693-0500 2019-11-26T14:34:45.693-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:45.693-0500 2019-11-26T14:34:45.693-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:45.694-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:45.714-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:45.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:45.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:45.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:46.029-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:46.215-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:46.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:46.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:46.529-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:46.550-0500 2019-11-26T14:34:46.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:46.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:46.715-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:46.896-0500 2019-11-26T14:34:46.896-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:46.896-0500 2019-11-26T14:34:46.896-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:46.897-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:46.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:46.978-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:46.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:47.029-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:47.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:47.215-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:47.215-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:47.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:47.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:47.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:47.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:47.529-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:47.716-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:47.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:48.029-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:48.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:34:48.099-0500 2019-11-26T14:34:48.099-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:48.099-0500 2019-11-26T14:34:48.099-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:48.099-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:48.216-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:48.216-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:48.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:48.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:48.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:48.508-0500 I CONNPOOL [TaskExecutorPool-0] Ending idle connection to host localhost:20004 because the pool meets constraints; 2 connections to that host remain open
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:48.508-0500 I NETWORK [conn65] end connection 127.0.0.1:46104 (34 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:48.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:48.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:48.509-0500 I CONNPOOL [TaskExecutorPool-0] Ending idle connection to host localhost:20004 because the pool meets constraints; 1 connections to that host remain open
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:48.509-0500 I CONNPOOL [TaskExecutorPool-0] Ending idle connection to host localhost:20004 because the pool meets constraints; 1 connections to that host remain open
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:48.509-0500 I NETWORK [conn225] end connection 127.0.0.1:48642 (33 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:48.509-0500 I NETWORK [conn81] end connection 127.0.0.1:46139 (32 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:48.529-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:48.529-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:48.550-0500 2019-11-26T14:34:48.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:48.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:48.716-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:48.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:49.029-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:49.216-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:49.302-0500 2019-11-26T14:34:49.301-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:49.302-0500 2019-11-26T14:34:49.302-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:49.302-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:49.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:49.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:49.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:49.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:49.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:49.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:49.529-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:49.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:49.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:49.716-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:49.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:49.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:50.029-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:50.216-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:50.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:50.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:34:50.504-0500 2019-11-26T14:34:50.504-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:50.504-0500 2019-11-26T14:34:50.504-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:50.505-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:50.529-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:50.550-0500 2019-11-26T14:34:50.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:34:50.551-0500 2019-11-26T14:34:50.551-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:50.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:50.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:50.716-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:50.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:50.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:51.029-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:51.030-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:51.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:51.216-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:51.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:51.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:51.478-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:51.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:51.531-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:51.707-0500 2019-11-26T14:34:51.707-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:51.707-0500 2019-11-26T14:34:51.707-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:51.707-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:51.716-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:51.716-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:51.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:52.031-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:52.032-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:52.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:52.216-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:52.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:52.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:52.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:52.533-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:52.550-0500 2019-11-26T14:34:52.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:52.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:52.716-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:52.910-0500 2019-11-26T14:34:52.910-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:52.910-0500 2019-11-26T14:34:52.910-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:52.910-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:52.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:52.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:53.033-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:53.034-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:53.216-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:53.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:53.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:53.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:53.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:53.535-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:53.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:53.716-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:53.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:53.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:53.978-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:53.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:54.035-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:54.036-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:54.114-0500 2019-11-26T14:34:54.114-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:54.114-0500 2019-11-26T14:34:54.114-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:54.115-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:54.216-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:54.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:54.535-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:54.550-0500 2019-11-26T14:34:54.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:54.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:54.716-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:54.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:54.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:55.036-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:55.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:55.216-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:55.217-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:55.318-0500 2019-11-26T14:34:55.318-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:55.318-0500 2019-11-26T14:34:55.318-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:55.319-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:55.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:55.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:55.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:55.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:55.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:55.537-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:55.718-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:55.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:56.038-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:56.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:56.218-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:56.219-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:56.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:56.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:56.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:56.478-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:34:56.522-0500 2019-11-26T14:34:56.522-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:56.522-0500 2019-11-26T14:34:56.522-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:56.523-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:56.539-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:56.550-0500 2019-11-26T14:34:56.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:56.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:56.720-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:56.978-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:56.978-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:34:56.978-0500-5ddd7e605cde74b6784bbc7d", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574796896978), what: "balancer.round", ns: "", details: { executionTimeMillis: 20000, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:56.978-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:57.040-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:57.220-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:57.221-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:57.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:57.359-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:57.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:57.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:57.396-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:57.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:57.540-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:57.541-0500 I REPL_HB [ReplCoord-7] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:57.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:57.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:57.722-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:57.726-0500 2019-11-26T14:34:57.726-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:57.727-0500 2019-11-26T14:34:57.727-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:57.727-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:57.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:58.042-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:58.222-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:58.223-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:34:58.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:58.542-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:58.543-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:58.550-0500 2019-11-26T14:34:58.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:34:58.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:58.724-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:34:58.931-0500 2019-11-26T14:34:58.930-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:58.931-0500 2019-11-26T14:34:58.931-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:34:58.931-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:34:58.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:59.044-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:34:59.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:59.224-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:59.225-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:34:59.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:34:59.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:59.544-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:34:59.545-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:34:59.726-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:00.046-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:00.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:00.133-0500 2019-11-26T14:35:00.133-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:00.134-0500 2019-11-26T14:35:00.133-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:00.134-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:00.226-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:00.226-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:00.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:00.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:00.546-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:00.547-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:00.550-0500 2019-11-26T14:35:00.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:00.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:00.726-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:01.048-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:01.226-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:01.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:01.336-0500 2019-11-26T14:35:01.336-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:01.337-0500 2019-11-26T14:35:01.336-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:01.337-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:01.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:01.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:01.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:01.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:01.548-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:01.549-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:01.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:01.726-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:01.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:02.050-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:02.226-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:02.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:02.539-0500 2019-11-26T14:35:02.539-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:02.540-0500 2019-11-26T14:35:02.539-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:02.540-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:35:02.550-0500 2019-11-26T14:35:02.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:02.550-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:02.551-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:02.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:02.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:02.726-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:02.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:03.051-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:03.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:03.226-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:03.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:03.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:03.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:03.551-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:03.552-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:03.726-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:03.726-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:03.743-0500 2019-11-26T14:35:03.743-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:03.743-0500 2019-11-26T14:35:03.743-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:03.743-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:04.053-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:04.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:04.226-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:04.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:04.550-0500 2019-11-26T14:35:04.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:04.553-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:04.554-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:04.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:04.726-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:04.946-0500 2019-11-26T14:35:04.946-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:04.946-0500 2019-11-26T14:35:04.946-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:04.946-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:04.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:05.054-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:05.226-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:05.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:05.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:05.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:05.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:05.554-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:05.554-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:05.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:05.726-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:05.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:05.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:06.054-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:06.149-0500 2019-11-26T14:35:06.149-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:06.150-0500 2019-11-26T14:35:06.149-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:06.150-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:06.226-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:06.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:06.550-0500 2019-11-26T14:35:06.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:06.554-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:06.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:06.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:06.727-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:06.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:06.980-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:07.054-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:07.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:07.227-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:07.228-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:07.352-0500 2019-11-26T14:35:07.352-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:07.352-0500 2019-11-26T14:35:07.352-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:07.353-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:07.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:07.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:07.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:07.480-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:07.554-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:07.728-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:07.980-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:08.054-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:08.054-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:08.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:08.228-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:08.228-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:08.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:08.480-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:08.480-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:08.550-0500 2019-11-26T14:35:08.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:08.554-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:08.555-0500 2019-11-26T14:35:08.555-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:08.555-0500 2019-11-26T14:35:08.555-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:08.556-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:08.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:08.729-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:08.980-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:09.054-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:09.229-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:09.229-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:09.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:09.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:09.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:09.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:09.480-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:09.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:09.554-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:09.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:09.729-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:09.758-0500 2019-11-26T14:35:09.758-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:09.758-0500 2019-11-26T14:35:09.758-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:09.758-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:09.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:09.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:09.980-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:10.054-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:10.229-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:10.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:10.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:10.480-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:10.550-0500 2019-11-26T14:35:10.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:10.554-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:10.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:10.729-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:10.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:10.961-0500 2019-11-26T14:35:10.961-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:10.961-0500 2019-11-26T14:35:10.961-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:10.961-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:10.980-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:10.980-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:11.054-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:11.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:11.230-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:11.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:11.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:11.480-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:11.554-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:11.554-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:11.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:11.731-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:11.980-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:12.055-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:12.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:12.164-0500 2019-11-26T14:35:12.164-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:12.164-0500 2019-11-26T14:35:12.164-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:12.164-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:12.231-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:12.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:12.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:12.480-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:12.550-0500 2019-11-26T14:35:12.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:12.555-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:12.556-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:12.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:12.731-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:12.732-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:12.980-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:13.057-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:13.232-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:13.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:13.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:13.368-0500 2019-11-26T14:35:13.368-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:13.368-0500 2019-11-26T14:35:13.368-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:13.369-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:13.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:13.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:13.480-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:13.557-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:13.558-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:13.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:13.732-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:13.732-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:13.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:13.980-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:14.059-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:14.232-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:14.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:14.480-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:14.550-0500 2019-11-26T14:35:14.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:14.559-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:14.560-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:14.572-0500 2019-11-26T14:35:14.572-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:14.572-0500 2019-11-26T14:35:14.572-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:14.573-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:14.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:14.732-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:14.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:14.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:14.980-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:15.061-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:15.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:15.232-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:15.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:15.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:15.480-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:15.480-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:15.561-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:15.562-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:15.733-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:15.777-0500 2019-11-26T14:35:15.776-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:15.777-0500 2019-11-26T14:35:15.777-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:15.778-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:15.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:15.980-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:16.063-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:16.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:16.234-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:16.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:16.480-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:16.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:16.550-0500 2019-11-26T14:35:16.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:16.563-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:16.564-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:16.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:16.734-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:16.980-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:16.981-0500 2019-11-26T14:35:16.981-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:16.981-0500 2019-11-26T14:35:16.981-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:16.981-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:17.065-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:17.234-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:17.235-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:17.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:17.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:17.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:17.480-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:17.565-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:17.566-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:17.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:17.736-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:17.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:17.980-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:17.981-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:18.067-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:18.185-0500 2019-11-26T14:35:18.185-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:18.185-0500 2019-11-26T14:35:18.185-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:18.185-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:18.236-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:18.237-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:18.481-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:18.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:18.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:18.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:18.550-0500 2019-11-26T14:35:18.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:18.567-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:18.568-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:18.738-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:18.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:18.981-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:18.981-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:19.068-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:19.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:19.238-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:19.239-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:19.389-0500 2019-11-26T14:35:19.388-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:19.389-0500 2019-11-26T14:35:19.389-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:19.389-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:19.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:19.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:19.481-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:19.568-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:19.568-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:19.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:19.740-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:19.981-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:20.068-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:20.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:20.240-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:20.240-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:20.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:20.481-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:20.550-0500 2019-11-26T14:35:20.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:20.551-0500 2019-11-26T14:35:20.551-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:20.568-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:20.592-0500 2019-11-26T14:35:20.592-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:20.592-0500 2019-11-26T14:35:20.592-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:20.593-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:20.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:20.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:20.740-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:20.981-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:21.068-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:21.240-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:21.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:21.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:21.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:21.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:21.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:21.481-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:21.568-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:21.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:21.740-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:21.795-0500 2019-11-26T14:35:21.795-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:21.795-0500 2019-11-26T14:35:21.795-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:21.796-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:21.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:21.981-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:22.068-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:22.068-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:22.240-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:22.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:22.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:22.481-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:22.550-0500 2019-11-26T14:35:22.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:22.568-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:22.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:22.740-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:22.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:22.981-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:22.998-0500 2019-11-26T14:35:22.998-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:22.998-0500 2019-11-26T14:35:22.998-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:22.999-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:23.068-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:23.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:23.240-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:23.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:23.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:23.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:23.481-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:23.481-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:23.568-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:23.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:23.740-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:23.740-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:23.981-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:24.068-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:24.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:24.201-0500 2019-11-26T14:35:24.201-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:24.202-0500 2019-11-26T14:35:24.202-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:24.202-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:24.240-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:24.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:24.481-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:24.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:24.550-0500 2019-11-26T14:35:24.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:24.568-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:24.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:24.740-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:24.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:24.981-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:25.068-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:25.240-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:25.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:25.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:25.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:25.404-0500 2019-11-26T14:35:25.404-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:25.405-0500 2019-11-26T14:35:25.404-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:25.405-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:25.481-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:25.568-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:25.568-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:25.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:25.740-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:25.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:25.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:25.981-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:25.981-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:26.068-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:26.240-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:26.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:26.481-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:26.550-0500 2019-11-26T14:35:26.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:26.568-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:26.607-0500 2019-11-26T14:35:26.607-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:26.607-0500 2019-11-26T14:35:26.607-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:26.608-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:26.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:26.740-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:26.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:26.980-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:26.980-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:35:26.980-0500-5ddd7e7e5cde74b6784bbcb5", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574796926980), what: "balancer.round", ns: "", details: { executionTimeMillis: 20000, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:26.981-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:27.068-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:27.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:27.240-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:27.241-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:27.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:27.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:27.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:27.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:27.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:27.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:27.568-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:27.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:27.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:27.741-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:27.810-0500 2019-11-26T14:35:27.810-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:27.810-0500 2019-11-26T14:35:27.810-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:27.810-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:28.068-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:28.068-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:28.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:28.241-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:28.241-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:28.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:28.550-0500 2019-11-26T14:35:28.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:28.568-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:28.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:28.741-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:29.013-0500 2019-11-26T14:35:29.013-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:29.013-0500 2019-11-26T14:35:29.013-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:29.013-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:29.068-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:29.241-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:29.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:29.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:29.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:29.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:29.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:29.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:29.568-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:29.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:29.741-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:29.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:30.068-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:30.216-0500 2019-11-26T14:35:30.215-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:30.216-0500 2019-11-26T14:35:30.216-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:30.216-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:30.241-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:30.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:30.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:30.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:30.550-0500 2019-11-26T14:35:30.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:30.568-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:30.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:30.741-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:30.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:31.068-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:31.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:31.241-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:31.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:31.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:31.418-0500 2019-11-26T14:35:31.418-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:31.418-0500 2019-11-26T14:35:31.418-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:31.419-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:31.568-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:31.568-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:31.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:31.741-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:31.741-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:32.068-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:32.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:32.241-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:32.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:32.550-0500 2019-11-26T14:35:32.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:32.568-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:32.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:32.621-0500 2019-11-26T14:35:32.621-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:32.621-0500 2019-11-26T14:35:32.621-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:32.621-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:32.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:32.741-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:32.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:33.068-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:33.241-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:33.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:33.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:33.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:33.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:33.568-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:33.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:33.741-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:33.824-0500 2019-11-26T14:35:33.823-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:33.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:33.824-0500 2019-11-26T14:35:33.824-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:33.824-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:33.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:34.068-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:34.068-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:34.241-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:34.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:34.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:34.550-0500 2019-11-26T14:35:34.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:34.568-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:34.741-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:34.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:35.026-0500 2019-11-26T14:35:35.026-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:35.027-0500 2019-11-26T14:35:35.026-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:35.027-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:35.068-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:35.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:35.241-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:35.241-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:35.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:35.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:35.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:35.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:35.568-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:35.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:35.741-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:36.068-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:36.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:36.229-0500 2019-11-26T14:35:36.229-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:36.229-0500 2019-11-26T14:35:36.229-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:36.229-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:36.241-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:36.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:36.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:36.550-0500 2019-11-26T14:35:36.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:36.569-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:36.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:36.741-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:36.982-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:36.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:37.069-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:37.242-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:37.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:37.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:37.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:37.432-0500 2019-11-26T14:35:37.432-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:37.432-0500 2019-11-26T14:35:37.432-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:37.432-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:37.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:37.569-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:37.569-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:37.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:37.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:37.742-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:37.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:37.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:38.069-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:38.242-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:38.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:38.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:38.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:38.550-0500 2019-11-26T14:35:38.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:38.569-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:38.634-0500 2019-11-26T14:35:38.634-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:38.635-0500 2019-11-26T14:35:38.634-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:38.635-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:38.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:38.742-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:38.742-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:38.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:38.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:39.069-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:39.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:39.242-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:39.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:39.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:39.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:39.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:39.569-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:39.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:39.742-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:39.837-0500 2019-11-26T14:35:39.837-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:39.837-0500 2019-11-26T14:35:39.837-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:39.837-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:39.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:39.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:39.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:40.069-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:40.069-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:40.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:40.242-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:40.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:40.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:40.550-0500 2019-11-26T14:35:40.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:40.569-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:40.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:40.742-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:40.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:40.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:41.040-0500 2019-11-26T14:35:41.040-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:41.040-0500 2019-11-26T14:35:41.040-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:41.040-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:41.069-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:41.242-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:41.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:41.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:41.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:41.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:41.482-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:41.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:41.569-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:41.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:41.742-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:41.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:41.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:42.069-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:42.242-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:42.242-0500 2019-11-26T14:35:42.242-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:42.243-0500 2019-11-26T14:35:42.242-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:42.242-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:42.243-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:42.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:42.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:42.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:42.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:42.550-0500 2019-11-26T14:35:42.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:42.569-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:42.742-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:42.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:42.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:43.069-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:43.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:43.242-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:43.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:43.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:43.445-0500 2019-11-26T14:35:43.445-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:43.445-0500 2019-11-26T14:35:43.445-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:43.446-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:43.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:43.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:43.569-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:43.569-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:43.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:43.743-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:43.982-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:43.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:44.069-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:44.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:44.243-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:44.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:44.550-0500 2019-11-26T14:35:44.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:44.569-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:44.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:44.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:44.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:44.648-0500 2019-11-26T14:35:44.648-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:44.648-0500 2019-11-26T14:35:44.648-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:44.648-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:44.743-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:44.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:45.069-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:45.243-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:45.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:45.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:45.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:45.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:45.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:45.569-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:45.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:45.743-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:45.743-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:45.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:45.850-0500 2019-11-26T14:35:45.850-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:45.851-0500 2019-11-26T14:35:45.850-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:45.851-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:45.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:46.069-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:46.069-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:46.243-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:46.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:46.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:46.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:46.550-0500 2019-11-26T14:35:46.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:46.569-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:46.744-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:46.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:46.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:46.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:47.053-0500 2019-11-26T14:35:47.053-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:47.053-0500 2019-11-26T14:35:47.053-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:47.054-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:47.069-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:47.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:47.244-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:47.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:47.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:47.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:47.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:47.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:47.569-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:47.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:47.744-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:47.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:47.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:48.069-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:48.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:48.245-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:48.256-0500 2019-11-26T14:35:48.256-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:48.256-0500 2019-11-26T14:35:48.256-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:48.256-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:48.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:48.482-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:48.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:48.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:48.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:48.550-0500 2019-11-26T14:35:48.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:48.569-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:48.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:48.745-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:48.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:49.069-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:49.245-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:49.245-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:49.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:49.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:49.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:49.459-0500 2019-11-26T14:35:49.458-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:49.459-0500 2019-11-26T14:35:49.459-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:49.459-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:49.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:49.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:49.569-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:49.569-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:49.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:49.745-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:49.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:49.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:50.069-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:50.245-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:50.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:50.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:50.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:50.550-0500 2019-11-26T14:35:50.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:50.551-0500 2019-11-26T14:35:50.551-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:50.569-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:50.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:50.661-0500 2019-11-26T14:35:50.661-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:50.661-0500 2019-11-26T14:35:50.661-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:50.662-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:50.745-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:50.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:50.982-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:50.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:51.069-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:51.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:51.245-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:51.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:51.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:51.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:51.569-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:51.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:51.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:51.745-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:51.864-0500 2019-11-26T14:35:51.864-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:51.864-0500 2019-11-26T14:35:51.864-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:51.865-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:51.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:52.069-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:52.069-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:52.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:52.245-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:52.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:52.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:52.550-0500 2019-11-26T14:35:52.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:52.569-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:52.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:52.745-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:52.745-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:52.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:53.067-0500 2019-11-26T14:35:53.067-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:53.067-0500 2019-11-26T14:35:53.067-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:53.068-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:53.069-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:53.245-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:53.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:53.341-0500 I CONNPOOL [AddShard-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:53.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:53.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:53.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:53.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:53.569-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:53.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:53.745-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:53.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:53.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:53.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:54.069-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:54.245-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:54.270-0500 2019-11-26T14:35:54.270-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:54.270-0500 2019-11-26T14:35:54.270-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:54.270-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:54.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:54.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:54.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:54.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:54.550-0500 2019-11-26T14:35:54.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:54.569-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:54.745-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:54.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:54.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:54.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:55.069-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:55.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:55.245-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:55.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:55.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:55.473-0500 2019-11-26T14:35:55.473-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:55.473-0500 2019-11-26T14:35:55.473-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:55.473-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:55.482-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:55.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:55.507-0500 I SH_REFR [ConfigServerCatalogCacheLoader-1] Refresh for collection config.system.sessions to version 1|0||5ddd7d713bbfe7fa5630d44a took 1 ms
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:55.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:55.569-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:55.569-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:55.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:55.745-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:55.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:56.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:56.069-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:56.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:56.245-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:56.245-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:56.482-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:56.499-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:56.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:35:56.550-0500 2019-11-26T14:35:56.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:56.569-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:56.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:56.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:35:56.675-0500 2019-11-26T14:35:56.675-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:56.675-0500 2019-11-26T14:35:56.675-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:56.676-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:56.745-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:56.982-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:56.982-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:35:56.982-0500-5ddd7e9c5cde74b6784bbcef", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574796956982), what: "balancer.round", ns: "", details: { executionTimeMillis: 20000, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:56.982-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:57.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:57.069-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:57.134-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:57.134-0500 I NETWORK [listener] connection accepted from 127.0.0.1:39400 #155 (20 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:57.135-0500 I NETWORK [conn155] received client metadata from 127.0.0.1:39400 conn155: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:57.135-0500 I SH_REFR [ConfigServerCatalogCacheLoader-1] Refresh for collection config.system.sessions to version 1|0||5ddd7d713bbfe7fa5630d44a took 1 ms
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:57.136-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:57.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:57.245-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:57.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:57.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:57.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:57.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:57.398-0500 I CONNPOOL [AddShard-TaskExecutor] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Pool for localhost:20004 has expired.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:57.398-0500 I NETWORK [conn34] end connection 127.0.0.1:45872 (31 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:57.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:57.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:57.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:57.569-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:57.591-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:57.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:57.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:57.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:57.745-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:57.878-0500 2019-11-26T14:35:57.878-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:57.878-0500 2019-11-26T14:35:57.878-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:57.878-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:58.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:58.069-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:58.070-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:58.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:58.245-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:58.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:58.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:58.522-0500 I CONNPOOL [TaskExecutorPool-0] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Pool for localhost:20004 has expired.
[fsm_workload_test:agg_out] 2019-11-26T14:35:58.550-0500 2019-11-26T14:35:58.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:58.570-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:58.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:58.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:58.745-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:58.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:35:58.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:59.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:59.070-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:59.070-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:35:59.081-0500 2019-11-26T14:35:59.081-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:59.081-0500 2019-11-26T14:35:59.081-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:35:59.081-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:59.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:59.245-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:59.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:35:59.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:35:59.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:35:59.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:35:59.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:35:59.570-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:35:59.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:59.745-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:35:59.745-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:00.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:00.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:00.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:00.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:00.245-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:00.283-0500 2019-11-26T14:36:00.283-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:00.283-0500 2019-11-26T14:36:00.283-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:00.284-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:00.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:00.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:00.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:00.550-0500 2019-11-26T14:36:00.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:00.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:00.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:00.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:00.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:00.745-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:00.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:01.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:01.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:01.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:01.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:01.245-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:01.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:01.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:01.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:36:01.486-0500 2019-11-26T14:36:01.486-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:01.486-0500 2019-11-26T14:36:01.486-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:01.486-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:01.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:01.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:01.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:01.636-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:01.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:01.745-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:01.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:02.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:02.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:02.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:02.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:02.245-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:02.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:02.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:02.550-0500 2019-11-26T14:36:02.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:02.571-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:02.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:02.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:02.689-0500 2019-11-26T14:36:02.689-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:02.689-0500 2019-11-26T14:36:02.689-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:02.689-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:02.745-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:02.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:03.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:03.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:03.136-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:03.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:03.245-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:03.245-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:03.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:03.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:03.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:03.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:03.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:03.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:03.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:03.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:03.745-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:03.892-0500 2019-11-26T14:36:03.892-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:03.892-0500 2019-11-26T14:36:03.892-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:03.892-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:04.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:04.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:04.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:04.245-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:04.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:04.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:04.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:04.550-0500 2019-11-26T14:36:04.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:04.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:04.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:04.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:04.745-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:04.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:05.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:05.071-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:05.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:05.094-0500 2019-11-26T14:36:05.094-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:05.095-0500 2019-11-26T14:36:05.094-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:05.095-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:05.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:05.245-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:05.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:05.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:05.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:05.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:05.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:05.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:05.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:05.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:05.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:05.745-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:06.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:06.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:06.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:06.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:06.245-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:06.297-0500 2019-11-26T14:36:06.297-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:06.297-0500 2019-11-26T14:36:06.297-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:06.297-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:06.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:06.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:06.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:06.550-0500 2019-11-26T14:36:06.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:06.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:06.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:06.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:06.745-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:06.745-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:06.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:06.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:07.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:07.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:07.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:07.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:07.245-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:07.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:07.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:07.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:07.484-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:07.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:07.500-0500 2019-11-26T14:36:07.500-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:07.500-0500 2019-11-26T14:36:07.500-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:07.500-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:07.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:07.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:07.636-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:07.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:07.745-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:07.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:07.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:08.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:08.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:08.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:08.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:08.245-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:08.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:08.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:08.550-0500 2019-11-26T14:36:08.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:08.571-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:08.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:08.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:08.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:08.702-0500 2019-11-26T14:36:08.702-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:08.702-0500 2019-11-26T14:36:08.702-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:08.703-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:08.746-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:08.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:08.984-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:08.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:09.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:09.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:09.136-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:09.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:09.246-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:09.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:09.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:09.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:09.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:09.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:09.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:09.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:09.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:09.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:09.746-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:09.905-0500 2019-11-26T14:36:09.905-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:09.905-0500 2019-11-26T14:36:09.905-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:09.905-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:09.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:09.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:10.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:10.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:10.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:10.246-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:10.246-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:10.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:10.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:10.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:10.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:10.550-0500 2019-11-26T14:36:10.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:10.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:10.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:10.746-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:10.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:10.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:10.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:11.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:11.071-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:11.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:11.108-0500 2019-11-26T14:36:11.108-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:11.108-0500 2019-11-26T14:36:11.108-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:11.108-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:11.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:11.246-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:11.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:11.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:11.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:11.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:11.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:11.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:11.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:11.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:11.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:11.746-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:11.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:12.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:12.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:12.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:12.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:12.246-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:12.310-0500 2019-11-26T14:36:12.310-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:12.311-0500 2019-11-26T14:36:12.311-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:12.311-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:12.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:12.484-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:12.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:12.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:12.550-0500 2019-11-26T14:36:12.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:12.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:12.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:12.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:12.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:12.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:12.746-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:12.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:13.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:13.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:13.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:13.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:13.246-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:13.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:13.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:13.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:13.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:13.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:13.513-0500 2019-11-26T14:36:13.513-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:13.513-0500 2019-11-26T14:36:13.513-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:13.514-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:13.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:13.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:13.636-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:13.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:13.746-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:13.746-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:13.984-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:13.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:14.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:14.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:14.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:14.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:14.246-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:14.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:14.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:14.550-0500 2019-11-26T14:36:14.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:14.571-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:14.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:14.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:14.716-0500 2019-11-26T14:36:14.716-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:14.716-0500 2019-11-26T14:36:14.716-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:14.716-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:14.746-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:14.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:14.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:14.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:15.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:15.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:15.136-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:15.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:15.246-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:15.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:15.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:15.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:15.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:15.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:15.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:15.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:15.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:15.746-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:15.918-0500 2019-11-26T14:36:15.918-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:15.919-0500 2019-11-26T14:36:15.918-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:15.919-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:15.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:15.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:16.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:16.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:16.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:16.246-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:16.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:16.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:16.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:16.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:16.550-0500 2019-11-26T14:36:16.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:16.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:16.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:16.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:16.746-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:16.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:16.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:17.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:17.071-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:17.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:17.121-0500 2019-11-26T14:36:17.121-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:17.121-0500 2019-11-26T14:36:17.121-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:17.121-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:17.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:17.246-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:17.246-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:17.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:17.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:17.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:17.484-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:17.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:17.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:17.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:17.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:17.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:17.746-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:17.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:18.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:18.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:18.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:18.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:18.246-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:18.324-0500 2019-11-26T14:36:18.324-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:18.324-0500 2019-11-26T14:36:18.324-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:18.324-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:18.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:18.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:18.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:18.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:18.550-0500 2019-11-26T14:36:18.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:18.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:18.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:18.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:18.746-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:18.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:18.984-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:18.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:19.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:19.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:19.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:19.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:19.246-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:19.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:19.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:19.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:19.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:19.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:19.526-0500 2019-11-26T14:36:19.526-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:19.527-0500 2019-11-26T14:36:19.527-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:19.527-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:19.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:19.600-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:19.636-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:19.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:19.746-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:19.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:20.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:20.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:20.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:20.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:20.246-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:20.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:20.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:20.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:20.550-0500 2019-11-26T14:36:20.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:36:20.551-0500 2019-11-26T14:36:20.551-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:20.571-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:20.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:20.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:20.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:20.729-0500 2019-11-26T14:36:20.729-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:20.729-0500 2019-11-26T14:36:20.729-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:20.730-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:20.746-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:20.747-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:20.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:21.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:21.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:21.136-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:21.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:21.247-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:21.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:21.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:21.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:21.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:21.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:21.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:21.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:21.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:21.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:21.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:21.747-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:21.747-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:21.932-0500 2019-11-26T14:36:21.932-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:21.932-0500 2019-11-26T14:36:21.932-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:21.932-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:21.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:22.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:22.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:22.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:22.247-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:22.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:22.484-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:22.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:22.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:22.550-0500 2019-11-26T14:36:22.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:22.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:22.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:22.747-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:22.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:22.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:22.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:22.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:23.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:23.071-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:23.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:23.135-0500 2019-11-26T14:36:23.135-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:23.135-0500 2019-11-26T14:36:23.135-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:23.135-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:23.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:23.247-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:23.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:23.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:23.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:23.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:23.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:23.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:23.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:23.747-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:23.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:23.984-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:23.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:24.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:24.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:24.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:24.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:24.247-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:24.337-0500 2019-11-26T14:36:24.337-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:24.338-0500 2019-11-26T14:36:24.338-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:24.338-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:24.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:24.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:24.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:24.550-0500 2019-11-26T14:36:24.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:24.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:24.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:24.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:24.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:24.747-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:24.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:25.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:25.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:25.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:25.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:25.247-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:25.247-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:25.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:25.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:25.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:25.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:25.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:25.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:25.540-0500 2019-11-26T14:36:25.540-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:25.540-0500 2019-11-26T14:36:25.540-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:25.541-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:25.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:25.582-0500 I CONNPOOL [ReplCoordExternNetwork] Dropping all pooled connections to localhost:20002 due to ShutdownInProgress: Pool for localhost:20002 has expired.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:25.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:25.636-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:25.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:25.747-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:25.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:26.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:26.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:26.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:26.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:26.248-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:26.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:26.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:26.484-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:26.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:26.550-0500 2019-11-26T14:36:26.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:26.571-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:26.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:26.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:26.743-0500 2019-11-26T14:36:26.743-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:26.743-0500 2019-11-26T14:36:26.743-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:26.743-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:26.748-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:26.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:26.984-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:26.984-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:36:26.984-0500-5ddd7eba5cde74b6784bbd2a", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574796986984), what: "balancer.round", ns: "", details: { executionTimeMillis: 20000, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:26.984-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:27.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:27.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:27.136-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:27.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:27.248-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:27.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:27.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:27.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:27.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:27.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:27.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:27.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:27.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:27.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:27.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:27.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:27.748-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:27.748-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:27.946-0500 2019-11-26T14:36:27.946-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:27.946-0500 2019-11-26T14:36:27.946-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:27.946-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:28.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:28.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:28.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:28.249-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:28.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:28.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:28.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:28.550-0500 2019-11-26T14:36:28.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:28.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:28.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:28.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:28.749-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:28.749-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:28.824-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:29.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:29.071-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:29.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:29.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:29.149-0500 2019-11-26T14:36:29.148-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:29.149-0500 2019-11-26T14:36:29.149-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:29.149-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:29.249-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:29.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:29.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:29.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:29.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:29.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:29.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:29.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:29.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:29.749-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:29.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:30.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:30.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:30.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:30.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:30.249-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:30.351-0500 2019-11-26T14:36:30.351-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:30.351-0500 2019-11-26T14:36:30.351-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:30.352-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:30.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:30.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:30.550-0500 2019-11-26T14:36:30.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:30.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:30.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:30.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:30.749-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:30.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:30.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:31.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:31.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:31.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:31.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:31.249-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:31.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:31.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:31.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:31.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:31.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:31.554-0500 2019-11-26T14:36:31.554-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:31.554-0500 2019-11-26T14:36:31.554-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:31.555-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:31.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:31.636-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:31.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:31.749-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:32.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:32.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:32.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:32.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:32.249-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:32.249-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:32.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:32.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:32.550-0500 2019-11-26T14:36:32.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:32.571-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:32.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:32.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:32.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:32.749-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:32.757-0500 2019-11-26T14:36:32.757-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:32.757-0500 2019-11-26T14:36:32.757-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:32.757-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:32.831-0500 I CONNPOOL [ShardRegistry] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Pool for localhost:20004 has expired.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:32.831-0500 I NETWORK [conn138] end connection 127.0.0.1:46692 (30 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:33.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:33.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:33.136-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:33.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:33.249-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:33.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:33.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:33.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:33.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:33.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:33.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:33.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:33.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:33.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:33.749-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:33.960-0500 2019-11-26T14:36:33.959-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:33.960-0500 2019-11-26T14:36:33.960-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:33.960-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:34.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:34.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:34.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:34.249-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:34.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:34.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:34.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:34.550-0500 2019-11-26T14:36:34.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:34.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:34.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:34.749-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:34.750-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:34.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:35.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:35.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:35.071-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:35.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:35.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:35.162-0500 2019-11-26T14:36:35.162-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:35.163-0500 2019-11-26T14:36:35.162-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:35.163-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:35.250-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:35.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:35.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:35.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:35.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:35.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:35.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:35.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:35.750-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:35.750-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:36.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:36.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:36.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:36.250-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:36.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:36:36.365-0500 2019-11-26T14:36:36.365-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:36.365-0500 2019-11-26T14:36:36.365-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:36.365-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:36:36.550-0500 2019-11-26T14:36:36.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:36.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:36.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:36.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:36.636-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:36.750-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:36.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:36.986-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:36.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:37.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:37.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:37.136-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:37.250-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:37.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:37.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:37.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:37.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:37.568-0500 2019-11-26T14:36:37.568-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:37.568-0500 2019-11-26T14:36:37.568-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:37.568-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:37.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:37.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:37.750-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:37.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:37.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:38.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:38.251-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:38.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:38.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:38.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:38.550-0500 2019-11-26T14:36:38.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:38.571-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:38.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:38.750-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:38.770-0500 2019-11-26T14:36:38.770-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:38.770-0500 2019-11-26T14:36:38.770-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:38.771-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:38.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:38.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:39.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:39.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:39.250-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:39.251-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:39.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:39.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:39.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:39.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:39.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:39.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:39.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:39.751-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:39.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:39.973-0500 2019-11-26T14:36:39.973-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:39.973-0500 2019-11-26T14:36:39.973-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:39.973-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:39.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:40.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:40.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:40.251-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:40.251-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:40.486-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:40.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:40.550-0500 2019-11-26T14:36:40.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:40.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:40.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:40.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:40.751-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:40.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:41.071-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:41.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:41.176-0500 2019-11-26T14:36:41.175-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:41.176-0500 2019-11-26T14:36:41.176-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:41.176-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:41.251-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:41.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:41.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:41.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:41.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:41.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:41.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:41.751-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:41.986-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:41.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:42.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:42.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:42.251-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:42.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:42.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:36:42.378-0500 2019-11-26T14:36:42.378-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:42.378-0500 2019-11-26T14:36:42.378-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:42.379-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:42.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:42.550-0500 2019-11-26T14:36:42.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:42.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:42.751-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:42.751-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:42.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:42.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:43.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:43.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:43.251-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:43.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:43.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:43.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:43.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:43.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:43.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:43.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:43.581-0500 2019-11-26T14:36:43.581-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:43.581-0500 2019-11-26T14:36:43.581-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:43.581-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:43.751-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:43.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:43.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:44.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:44.251-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:44.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:44.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:44.550-0500 2019-11-26T14:36:44.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:44.571-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:44.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:44.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:44.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:44.751-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:44.784-0500 2019-11-26T14:36:44.784-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:44.784-0500 2019-11-26T14:36:44.784-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:44.784-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:44.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:44.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:45.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:45.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:45.251-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:45.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:45.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:45.486-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:45.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:45.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:45.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:45.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:45.751-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:45.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:45.986-0500 2019-11-26T14:36:45.986-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:45.987-0500 2019-11-26T14:36:45.986-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:45.987-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:46.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:46.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:46.251-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:46.251-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:46.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:46.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:46.550-0500 2019-11-26T14:36:46.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:46.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:46.751-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:46.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:46.986-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:46.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:47.071-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:47.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:47.189-0500 2019-11-26T14:36:47.189-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:47.189-0500 2019-11-26T14:36:47.189-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:47.189-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:47.251-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:47.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:47.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:47.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:47.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:47.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:47.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:47.751-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:47.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:48.071-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:48.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:48.251-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:48.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:48.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:36:48.392-0500 2019-11-26T14:36:48.392-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:48.392-0500 2019-11-26T14:36:48.392-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:48.392-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:48.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:48.550-0500 2019-11-26T14:36:48.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:48.571-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:48.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:48.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:48.751-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:48.751-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:48.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:49.071-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:49.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:49.251-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:49.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:49.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:49.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:49.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:49.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:49.571-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:49.594-0500 2019-11-26T14:36:49.594-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:49.594-0500 2019-11-26T14:36:49.594-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:49.595-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:49.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:49.751-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:49.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:49.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:50.072-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:50.251-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:50.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:50.486-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:50.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:50.550-0500 2019-11-26T14:36:50.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:36:50.551-0500 2019-11-26T14:36:50.551-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:50.572-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:50.572-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:50.638-0500 I NETWORK [conn18] end connection 127.0.0.1:51178 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:50.751-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:50.797-0500 2019-11-26T14:36:50.797-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:50.797-0500 2019-11-26T14:36:50.797-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:50.797-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:50.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:50.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:50.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:51.072-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:51.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:51.251-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:51.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:51.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:51.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:51.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:51.572-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:51.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:51.751-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:51.986-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:51.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:52.000-0500 2019-11-26T14:36:51.999-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:52.000-0500 2019-11-26T14:36:52.000-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:52.000-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:52.072-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:52.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:52.251-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:52.251-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:52.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:52.550-0500 2019-11-26T14:36:52.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:52.572-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:52.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:52.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:52.751-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:52.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:53.072-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:53.072-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:53.202-0500 2019-11-26T14:36:53.202-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:53.202-0500 2019-11-26T14:36:53.202-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:53.202-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:53.251-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:53.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:53.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:53.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:53.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:53.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:53.572-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:53.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:53.751-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:53.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:54.072-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:54.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:54.251-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:54.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:54.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:36:54.405-0500 2019-11-26T14:36:54.405-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:54.405-0500 2019-11-26T14:36:54.405-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:54.405-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:54.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:54.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:54.550-0500 2019-11-26T14:36:54.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:54.573-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:54.751-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:54.752-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:54.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:54.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:55.073-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:55.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:55.252-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:55.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:55.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:55.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:55.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:55.486-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:55.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:55.573-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:55.608-0500 2019-11-26T14:36:55.607-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:55.608-0500 2019-11-26T14:36:55.608-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:55.608-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:55.752-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:55.752-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:55.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:56.073-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:56.252-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:56.486-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:36:56.550-0500 2019-11-26T14:36:56.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:56.573-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:56.573-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:56.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:56.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:56.752-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:56.810-0500 2019-11-26T14:36:56.810-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:56.810-0500 2019-11-26T14:36:56.810-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:56.810-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:56.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:56.986-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:56.986-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:56.986-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:36:56.986-0500-5ddd7ed85cde74b6784bbd62", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574797016986), what: "balancer.round", ns: "", details: { executionTimeMillis: 20000, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:56.986-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:57.073-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:57.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:57.135-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 1 connections to that host remain open
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:57.135-0500 I NETWORK [conn23] end connection 127.0.0.1:55596 (19 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:57.252-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:57.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:57.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:57.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:57.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:57.522-0500 I NETWORK [conn40] end connection 127.0.0.1:45914 (29 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:57.573-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:57.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:57.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:57.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:57.752-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:57.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:36:58.013-0500 2019-11-26T14:36:58.013-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:58.013-0500 2019-11-26T14:36:58.013-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:58.013-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:58.073-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:36:58.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:58.252-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:58.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:58.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:36:58.550-0500 2019-11-26T14:36:58.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:58.573-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:58.752-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:36:58.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:59.073-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:59.073-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:36:59.216-0500 2019-11-26T14:36:59.215-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:59.216-0500 2019-11-26T14:36:59.216-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:36:59.216-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:59.252-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:59.252-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:36:59.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:36:59.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:36:59.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:36:59.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:36:59.573-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:36:59.752-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:00.074-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:00.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:00.253-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:00.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:00.419-0500 2019-11-26T14:37:00.419-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:00.419-0500 2019-11-26T14:37:00.419-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:00.419-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:00.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:00.550-0500 2019-11-26T14:37:00.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:00.574-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:00.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:00.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:00.752-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:01.074-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:01.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:01.253-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:01.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:01.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:01.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:01.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:01.574-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:01.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:01.622-0500 2019-11-26T14:37:01.622-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:01.622-0500 2019-11-26T14:37:01.622-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:01.622-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:01.753-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:01.753-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:02.075-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:02.254-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:02.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:02.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:02.550-0500 2019-11-26T14:37:02.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:02.575-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:02.575-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:02.754-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:02.754-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:02.824-0500 2019-11-26T14:37:02.824-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:02.825-0500 2019-11-26T14:37:02.825-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:02.825-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:02.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:03.075-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:03.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:03.254-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:03.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:03.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:03.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:03.575-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:03.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:03.754-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:03.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:04.027-0500 2019-11-26T14:37:04.027-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:04.027-0500 2019-11-26T14:37:04.027-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:04.028-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:04.075-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:04.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:04.254-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:04.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:04.550-0500 2019-11-26T14:37:04.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:04.576-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:04.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:04.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:04.754-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:04.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:05.076-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:05.077-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:05.230-0500 2019-11-26T14:37:05.230-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:05.230-0500 2019-11-26T14:37:05.230-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:05.230-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:05.254-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:05.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:05.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:05.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:05.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:05.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:05.577-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:05.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:05.754-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:06.077-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:06.077-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:06.254-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:06.254-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:06.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:06.433-0500 2019-11-26T14:37:06.432-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:06.433-0500 2019-11-26T14:37:06.433-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:06.433-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:37:06.550-0500 2019-11-26T14:37:06.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:06.577-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:06.754-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:06.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:06.988-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:06.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:07.077-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:07.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:07.135-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:07.254-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:07.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:07.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:07.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:07.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:07.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:07.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:07.577-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:07.635-0500 2019-11-26T14:37:07.635-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:07.635-0500 2019-11-26T14:37:07.635-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:07.636-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:07.754-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:07.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:08.077-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:08.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:08.254-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:08.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:08.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:08.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:08.550-0500 2019-11-26T14:37:08.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:08.577-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:08.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:08.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:08.754-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:08.754-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:08.838-0500 2019-11-26T14:37:08.838-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:08.838-0500 2019-11-26T14:37:08.838-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:08.839-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:08.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:09.077-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:09.254-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:09.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:09.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:09.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:09.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:09.577-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:09.577-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:09.621-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:09.755-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:09.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:09.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:09.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:10.041-0500 2019-11-26T14:37:10.041-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:10.041-0500 2019-11-26T14:37:10.041-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:10.041-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:10.077-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:10.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:10.255-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:10.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:10.488-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:10.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:10.550-0500 2019-11-26T14:37:10.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:10.577-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:10.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:10.755-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:10.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:10.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:10.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:11.077-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:11.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:11.244-0500 2019-11-26T14:37:11.244-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:11.244-0500 2019-11-26T14:37:11.244-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:11.244-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:11.255-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:11.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:11.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:11.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:11.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:11.577-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:11.755-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:11.988-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:11.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:12.077-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:12.077-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:12.255-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:12.255-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:12.426-0500 I CONNPOOL [ShardRegistry] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Pool for localhost:20004 has expired.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:12.426-0500 I NETWORK [conn74] end connection 127.0.0.1:46126 (28 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:37:12.446-0500 2019-11-26T14:37:12.446-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:12.447-0500 2019-11-26T14:37:12.446-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:12.447-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:12.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:12.529-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:12.550-0500 2019-11-26T14:37:12.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:12.554-0500 I CONNPOOL [ShardRegistry] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Pool for localhost:20004 has expired.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:12.554-0500 I NETWORK [conn96] end connection 127.0.0.1:46200 (27 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:12.577-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:12.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:12.755-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:12.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:13.077-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:13.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:13.255-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:13.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:13.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:13.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:13.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:13.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:13.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:13.558-0500 I CONNPOOL [TaskExecutorPool-0] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Pool for localhost:20004 has expired.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:13.577-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:13.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:13.649-0500 2019-11-26T14:37:13.649-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:13.649-0500 2019-11-26T14:37:13.649-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:13.650-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:13.755-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:13.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:14.078-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:14.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:14.255-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:14.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:14.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:14.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:14.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:14.550-0500 2019-11-26T14:37:14.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:14.578-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:14.755-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:14.755-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:14.852-0500 2019-11-26T14:37:14.852-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:14.852-0500 2019-11-26T14:37:14.852-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:14.852-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:14.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:14.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:15.079-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:15.255-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:15.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:15.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:15.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:15.488-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:15.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:15.579-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:15.579-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:15.755-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:15.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:15.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:16.055-0500 2019-11-26T14:37:16.055-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:16.055-0500 2019-11-26T14:37:16.055-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:16.055-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:16.079-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:16.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:16.255-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:16.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:16.550-0500 2019-11-26T14:37:16.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:16.579-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:16.613-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:16.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:16.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:16.755-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:16.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:16.988-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:16.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:17.079-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:17.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:17.255-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:17.257-0500 2019-11-26T14:37:17.257-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:17.257-0500 2019-11-26T14:37:17.257-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:17.258-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:17.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:17.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:17.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:17.579-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:17.755-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:17.932-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:17.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:18.079-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:18.079-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:18.255-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:18.255-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:18.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:18.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:18.460-0500 2019-11-26T14:37:18.460-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:18.460-0500 2019-11-26T14:37:18.460-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:18.460-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:18.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:18.550-0500 2019-11-26T14:37:18.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:18.579-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:18.755-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:18.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:19.079-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:19.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:19.255-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:19.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:19.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:19.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:19.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:19.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:19.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:19.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:19.579-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:19.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:19.663-0500 2019-11-26T14:37:19.662-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:19.663-0500 2019-11-26T14:37:19.663-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:19.663-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:19.755-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:19.963-0500 I CONNPOOL [ShardRegistry] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Pool for localhost:20004 has expired.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:19.963-0500 I NETWORK [conn108] end connection 127.0.0.1:46344 (26 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:19.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:20.079-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:20.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:20.255-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:20.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:20.488-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:20.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:20.550-0500 2019-11-26T14:37:20.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:20.551-0500 2019-11-26T14:37:20.551-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:20.579-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:20.755-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:20.755-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:20.865-0500 2019-11-26T14:37:20.865-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:20.865-0500 2019-11-26T14:37:20.865-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:20.866-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:20.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:21.079-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:21.255-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:21.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:21.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:21.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:21.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:21.579-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:21.579-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:21.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:21.755-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:21.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:21.988-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:21.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:22.068-0500 2019-11-26T14:37:22.068-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:22.068-0500 2019-11-26T14:37:22.068-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:22.068-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:22.079-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:22.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:22.255-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:22.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:22.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:22.550-0500 2019-11-26T14:37:22.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:22.579-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:22.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:22.755-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:22.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:22.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:23.079-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:23.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:23.255-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:23.271-0500 2019-11-26T14:37:23.271-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:23.271-0500 2019-11-26T14:37:23.271-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:23.271-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:23.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:23.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:23.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:23.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:23.579-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:23.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:23.755-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:23.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:24.079-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:24.079-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:24.255-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:24.255-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:24.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:24.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:24.473-0500 2019-11-26T14:37:24.473-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:24.473-0500 2019-11-26T14:37:24.473-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:24.474-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:24.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:24.550-0500 2019-11-26T14:37:24.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:24.579-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:24.756-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:24.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:25.079-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:25.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:25.256-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:25.256-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:25.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:25.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:25.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:25.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:25.488-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:25.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:25.579-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:25.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:25.676-0500 2019-11-26T14:37:25.676-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:25.676-0500 2019-11-26T14:37:25.676-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:25.676-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:25.756-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:25.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:26.079-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:26.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:26.256-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:26.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:26.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:26.488-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:26.550-0500 2019-11-26T14:37:26.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:26.579-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:26.756-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:26.879-0500 2019-11-26T14:37:26.878-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:26.879-0500 2019-11-26T14:37:26.879-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:26.879-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:26.988-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:26.988-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:26.988-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:37:26.988-0500-5ddd7ef65cde74b6784bbd9a", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574797046988), what: "balancer.round", ns: "", details: { executionTimeMillis: 20000, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:26.988-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:27.079-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:27.257-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:27.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:27.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:27.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:27.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:27.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:27.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:27.579-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:27.579-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:27.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:27.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:27.756-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:27.756-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:28.079-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:28.081-0500 2019-11-26T14:37:28.081-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:28.081-0500 2019-11-26T14:37:28.081-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:28.082-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:28.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:28.257-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:28.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:28.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:28.550-0500 2019-11-26T14:37:28.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:28.579-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:28.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:28.757-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:28.757-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:29.080-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:29.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:29.257-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:29.284-0500 2019-11-26T14:37:29.284-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:29.284-0500 2019-11-26T14:37:29.284-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:29.285-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:29.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:29.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:29.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:29.580-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:29.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:29.757-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:29.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:30.080-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:30.080-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:30.257-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:30.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:30.487-0500 2019-11-26T14:37:30.487-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:30.487-0500 2019-11-26T14:37:30.487-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:30.487-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:37:30.550-0500 2019-11-26T14:37:30.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:30.580-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:30.757-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:30.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:31.080-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:31.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:31.257-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:31.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:31.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:31.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:31.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:31.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:31.581-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:31.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:31.689-0500 2019-11-26T14:37:31.689-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:31.689-0500 2019-11-26T14:37:31.689-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:31.690-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:31.757-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:32.081-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:32.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:32.257-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:32.257-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:32.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:32.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:32.550-0500 2019-11-26T14:37:32.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:32.581-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:32.757-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:32.892-0500 2019-11-26T14:37:32.892-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:32.892-0500 2019-11-26T14:37:32.892-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:32.892-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:33.081-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:33.257-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:33.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:33.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:33.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:33.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:33.581-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:33.582-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:33.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:33.757-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:34.082-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:34.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:34.094-0500 2019-11-26T14:37:34.094-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:34.095-0500 2019-11-26T14:37:34.095-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:34.095-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:34.257-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:34.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:34.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:34.550-0500 2019-11-26T14:37:34.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:34.582-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:34.582-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:34.757-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:34.758-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:35.082-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:35.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:35.258-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:35.297-0500 2019-11-26T14:37:35.297-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:35.297-0500 2019-11-26T14:37:35.297-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:35.297-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:35.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:35.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:35.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:35.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:35.582-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:35.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:35.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:35.758-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:35.758-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:36.082-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:36.258-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:36.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:36.499-0500 2019-11-26T14:37:36.499-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:36.500-0500 2019-11-26T14:37:36.500-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:36.500-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:37:36.550-0500 2019-11-26T14:37:36.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:36.582-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:36.758-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:36.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:36.990-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:36.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:37.082-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:37.082-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:37.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:37.135-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:37.258-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:37.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:37.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:37.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:37.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:37.582-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:37.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:37.702-0500 2019-11-26T14:37:37.702-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:37.702-0500 2019-11-26T14:37:37.702-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:37.702-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:37.758-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:37.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:37.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:38.082-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:38.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:38.258-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:38.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:38.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:38.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:38.550-0500 2019-11-26T14:37:38.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:38.582-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:38.758-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:38.905-0500 2019-11-26T14:37:38.904-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:38.905-0500 2019-11-26T14:37:38.905-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:38.905-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:38.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:39.082-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:39.258-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:39.259-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:39.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:39.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:39.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:39.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:39.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:39.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:39.582-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:39.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:39.759-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:39.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:39.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:40.082-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:40.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:40.107-0500 2019-11-26T14:37:40.107-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:40.107-0500 2019-11-26T14:37:40.107-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:40.108-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:40.259-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:40.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:40.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:40.490-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:40.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:40.550-0500 2019-11-26T14:37:40.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:40.582-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:40.582-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:40.759-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:40.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:41.082-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:41.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:41.259-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:41.310-0500 2019-11-26T14:37:41.310-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:41.310-0500 2019-11-26T14:37:41.310-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:41.310-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:41.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:41.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:41.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:41.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:41.583-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:41.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:41.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:41.759-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:41.990-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:41.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:42.082-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:42.259-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:42.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:42.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:42.512-0500 2019-11-26T14:37:42.512-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:42.512-0500 2019-11-26T14:37:42.512-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:42.513-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:37:42.550-0500 2019-11-26T14:37:42.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:42.582-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:42.759-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:42.759-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:42.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:43.082-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:43.082-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:43.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:43.259-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:43.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:43.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:43.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:43.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:43.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:43.582-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:43.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:43.715-0500 2019-11-26T14:37:43.715-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:43.715-0500 2019-11-26T14:37:43.715-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:43.715-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:43.759-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:43.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:43.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:44.082-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:44.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:44.259-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:44.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:44.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:44.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:44.550-0500 2019-11-26T14:37:44.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:44.582-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:44.759-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:44.917-0500 2019-11-26T14:37:44.917-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:44.918-0500 2019-11-26T14:37:44.917-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:44.918-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:44.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:44.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:45.082-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:45.259-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:45.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:45.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:45.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:45.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:45.490-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:45.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:45.582-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:45.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:45.759-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:45.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:46.082-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:46.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:46.120-0500 2019-11-26T14:37:46.120-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:46.120-0500 2019-11-26T14:37:46.120-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:46.121-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:46.259-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:46.259-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:46.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:46.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:46.550-0500 2019-11-26T14:37:46.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:46.582-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:46.582-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:46.759-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:46.990-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:46.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:47.082-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:47.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:47.259-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:47.323-0500 2019-11-26T14:37:47.323-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:47.323-0500 2019-11-26T14:37:47.323-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:47.323-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:47.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:47.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:47.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:47.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:47.582-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:47.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:47.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:47.759-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:47.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:48.083-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:48.259-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:48.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:48.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:48.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:48.526-0500 2019-11-26T14:37:48.525-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:48.526-0500 2019-11-26T14:37:48.526-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:48.526-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:37:48.550-0500 2019-11-26T14:37:48.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:48.583-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:48.759-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:48.759-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:48.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:49.083-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:49.083-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:49.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:49.259-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:49.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:49.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:49.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:49.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:49.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:49.583-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:49.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:49.728-0500 2019-11-26T14:37:49.728-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:49.728-0500 2019-11-26T14:37:49.728-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:49.729-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:49.759-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:49.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:49.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:50.084-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:50.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:50.259-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:50.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:50.490-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:50.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:50.550-0500 2019-11-26T14:37:50.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:50.551-0500 2019-11-26T14:37:50.551-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:50.584-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:50.759-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:50.931-0500 2019-11-26T14:37:50.931-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:50.931-0500 2019-11-26T14:37:50.931-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:50.931-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:50.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:50.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:51.084-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:51.259-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:51.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:51.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:51.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:51.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:51.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:51.584-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:51.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:51.759-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:51.990-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:51.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:52.084-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:52.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:52.134-0500 2019-11-26T14:37:52.134-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:52.134-0500 2019-11-26T14:37:52.134-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:52.134-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:52.259-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:52.259-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:52.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:52.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:52.550-0500 2019-11-26T14:37:52.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:52.584-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:52.584-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:52.759-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:52.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:53.084-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:53.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:53.259-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:53.336-0500 2019-11-26T14:37:53.336-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:53.336-0500 2019-11-26T14:37:53.336-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:53.337-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:53.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:53.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:53.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:53.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:53.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:53.584-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:53.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:53.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:53.759-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:53.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:54.084-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:54.259-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:54.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:54.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:54.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:54.539-0500 2019-11-26T14:37:54.539-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:54.539-0500 2019-11-26T14:37:54.539-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:54.539-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:37:54.550-0500 2019-11-26T14:37:54.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:54.584-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:54.759-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:54.759-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:54.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:55.084-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:55.084-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:55.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:55.259-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:55.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:55.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:55.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:55.490-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:55.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:55.584-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:55.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:55.742-0500 2019-11-26T14:37:55.741-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:55.742-0500 2019-11-26T14:37:55.742-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:55.742-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:55.759-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:55.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:55.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:56.084-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:56.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:56.259-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:56.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:56.490-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:37:56.550-0500 2019-11-26T14:37:56.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:56.584-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:56.759-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:56.944-0500 2019-11-26T14:37:56.944-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:56.944-0500 2019-11-26T14:37:56.944-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:56.945-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:56.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:56.990-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:56.990-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:37:56.990-0500-5ddd7f145cde74b6784bbdd2", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574797076990), what: "balancer.round", ns: "", details: { executionTimeMillis: 20000, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:56.990-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:56.990-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:57.084-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:57.259-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:57.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:57.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:57.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:57.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:57.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:57.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:57.584-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:57.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:57.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:57.759-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:58.084-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:58.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:58.147-0500 2019-11-26T14:37:58.147-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:58.147-0500 2019-11-26T14:37:58.147-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:58.148-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:58.259-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:58.259-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:37:58.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:58.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:37:58.550-0500 2019-11-26T14:37:58.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:58.584-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:58.584-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:58.759-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:59.084-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:37:59.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:59.259-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:37:59.350-0500 2019-11-26T14:37:59.350-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:59.350-0500 2019-11-26T14:37:59.350-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:37:59.350-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:37:59.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:37:59.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:37:59.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:59.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:59.584-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:37:59.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:37:59.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:37:59.759-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:00.084-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:00.259-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:00.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:00.550-0500 2019-11-26T14:38:00.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:00.552-0500 2019-11-26T14:38:00.552-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:00.553-0500 2019-11-26T14:38:00.553-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:00.553-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:00.584-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:00.759-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:00.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:01.084-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:01.084-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:01.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:01.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:01.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:01.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:01.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:01.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:01.584-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:01.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:01.755-0500 2019-11-26T14:38:01.755-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:01.755-0500 2019-11-26T14:38:01.755-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:01.756-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:01.760-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:01.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:02.084-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:02.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:02.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:02.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:02.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:02.550-0500 2019-11-26T14:38:02.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:02.584-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:02.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:02.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:02.958-0500 2019-11-26T14:38:02.958-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:02.958-0500 2019-11-26T14:38:02.958-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:02.958-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:03.085-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:03.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:03.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:03.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:03.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:03.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:03.585-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:03.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:03.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:03.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:04.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:04.085-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:04.161-0500 2019-11-26T14:38:04.161-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:04.161-0500 2019-11-26T14:38:04.161-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:04.161-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:04.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:04.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:04.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:04.550-0500 2019-11-26T14:38:04.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:04.585-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:04.585-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:04.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:05.085-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:05.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:05.260-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:05.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:05.363-0500 2019-11-26T14:38:05.363-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:05.364-0500 2019-11-26T14:38:05.364-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:05.364-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:05.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:05.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:05.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:05.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:05.585-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:05.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:05.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:05.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:06.085-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:06.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:06.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:06.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:06.550-0500 2019-11-26T14:38:06.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:06.566-0500 2019-11-26T14:38:06.566-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:06.566-0500 2019-11-26T14:38:06.566-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:06.567-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:06.585-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:06.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:06.992-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:06.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:07.085-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:07.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:07.085-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:07.135-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:07.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:07.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:07.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:07.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:07.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:07.585-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:07.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:07.760-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:07.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:07.769-0500 2019-11-26T14:38:07.769-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:07.769-0500 2019-11-26T14:38:07.769-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:07.769-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:07.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:08.085-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:08.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:08.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:08.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:08.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:08.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:08.550-0500 2019-11-26T14:38:08.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:08.585-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:08.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:08.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:08.972-0500 2019-11-26T14:38:08.972-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:08.972-0500 2019-11-26T14:38:08.972-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:08.972-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:08.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:09.085-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:09.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:09.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:09.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:09.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:09.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:09.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:09.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:09.586-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:09.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:09.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:09.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:09.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:09.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:10.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:10.086-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:10.175-0500 2019-11-26T14:38:10.174-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:10.175-0500 2019-11-26T14:38:10.175-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:10.175-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:10.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:10.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:10.492-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:10.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:10.550-0500 2019-11-26T14:38:10.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:10.586-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:10.586-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:10.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:10.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:11.086-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:11.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:11.260-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:11.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:11.377-0500 2019-11-26T14:38:11.377-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:11.377-0500 2019-11-26T14:38:11.377-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:11.378-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:11.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:11.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:11.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:11.586-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:11.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:11.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:11.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:11.992-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:11.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:12.086-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:12.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:12.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:12.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:12.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:12.550-0500 2019-11-26T14:38:12.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:12.562-0500 I NETWORK [conn84] end connection 127.0.0.1:46142 (25 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:38:12.580-0500 2019-11-26T14:38:12.580-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:12.580-0500 2019-11-26T14:38:12.580-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:12.580-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:12.586-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:12.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:12.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:13.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:13.086-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:13.086-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:13.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:13.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:13.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:13.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:13.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:13.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:13.586-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:13.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:13.760-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:13.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:13.782-0500 2019-11-26T14:38:13.782-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:13.782-0500 2019-11-26T14:38:13.782-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:13.783-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:13.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:14.086-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:14.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:14.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:14.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:14.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:14.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:14.550-0500 2019-11-26T14:38:14.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:14.586-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:14.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:14.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:14.985-0500 2019-11-26T14:38:14.985-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:14.985-0500 2019-11-26T14:38:14.985-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:14.985-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:14.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:15.086-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:15.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:15.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:15.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:15.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:15.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:15.492-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:15.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:15.586-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:15.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:15.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:15.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:15.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:16.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:16.086-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:16.187-0500 2019-11-26T14:38:16.187-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:16.187-0500 2019-11-26T14:38:16.187-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:16.187-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:16.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:16.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:16.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:16.550-0500 2019-11-26T14:38:16.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:16.586-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:16.586-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:16.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:16.992-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:16.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:17.086-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:17.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:17.260-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:17.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:17.390-0500 2019-11-26T14:38:17.390-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:17.390-0500 2019-11-26T14:38:17.390-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:17.390-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:17.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:17.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:17.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:17.586-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:17.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:17.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:17.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:17.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:18.087-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:18.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:18.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:18.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:18.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:18.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:18.550-0500 2019-11-26T14:38:18.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:18.587-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:18.593-0500 2019-11-26T14:38:18.593-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:18.593-0500 2019-11-26T14:38:18.593-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:18.593-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:18.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:18.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:19.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:19.087-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:19.088-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:19.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:19.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:19.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:19.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:19.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:19.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:19.588-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:19.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:19.760-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:19.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:19.795-0500 2019-11-26T14:38:19.795-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:19.795-0500 2019-11-26T14:38:19.795-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:19.796-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:19.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:20.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:20.088-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:20.088-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:20.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:20.492-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:20.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:20.550-0500 2019-11-26T14:38:20.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:20.551-0500 2019-11-26T14:38:20.551-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:20.588-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:20.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:20.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:20.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:20.998-0500 2019-11-26T14:38:20.998-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:20.998-0500 2019-11-26T14:38:20.998-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:20.998-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:21.088-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:21.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:21.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:21.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:21.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:21.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:21.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:21.589-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:21.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:21.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:21.948-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:21.992-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:21.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:22.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:22.089-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:22.201-0500 2019-11-26T14:38:22.201-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:22.201-0500 2019-11-26T14:38:22.201-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:22.201-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:22.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:22.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:22.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:22.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:22.550-0500 2019-11-26T14:38:22.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:22.589-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:22.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:22.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:23.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:23.089-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:23.260-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:23.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:23.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:23.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:23.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:23.403-0500 2019-11-26T14:38:23.403-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:23.403-0500 2019-11-26T14:38:23.403-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:23.403-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:23.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:23.589-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:23.589-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:23.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:23.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:23.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:24.089-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:24.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:24.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:24.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:24.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:24.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:24.550-0500 2019-11-26T14:38:24.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:24.589-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:24.606-0500 2019-11-26T14:38:24.605-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:24.606-0500 2019-11-26T14:38:24.606-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:24.606-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:24.647-0500 I CONNPOOL [ReplCoordExternNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:24.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:24.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:25.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:25.089-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:25.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:25.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:25.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:25.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:25.492-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:25.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:25.589-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:25.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:25.760-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:25.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:25.808-0500 2019-11-26T14:38:25.808-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:25.808-0500 2019-11-26T14:38:25.808-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:25.808-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:25.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:26.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:26.089-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:26.090-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:26.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:26.492-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:26.550-0500 2019-11-26T14:38:26.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:26.590-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:26.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:26.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:26.972-0500 I CONNPOOL [ShardRegistry] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Pool for localhost:20004 has expired.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:26.972-0500 I NETWORK [conn55] end connection 127.0.0.1:46028 (24 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:26.992-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:26.992-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:26.992-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:38:26.992-0500-5ddd7f325cde74b6784bbe0a", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574797106992), what: "balancer.round", ns: "", details: { executionTimeMillis: 20000, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:26.992-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:27.011-0500 2019-11-26T14:38:27.010-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:27.011-0500 2019-11-26T14:38:27.011-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:27.011-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:27.090-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:27.091-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:27.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:27.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:27.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:27.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:27.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:27.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:27.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:27.591-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:27.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:27.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:27.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:28.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:28.091-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:28.091-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:28.213-0500 2019-11-26T14:38:28.213-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:28.213-0500 2019-11-26T14:38:28.213-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:28.214-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:28.260-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:28.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:28.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:28.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:28.550-0500 2019-11-26T14:38:28.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:28.591-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:28.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:29.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:29.092-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:29.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:29.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:29.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:29.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:29.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:29.416-0500 2019-11-26T14:38:29.416-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:29.416-0500 2019-11-26T14:38:29.416-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:29.416-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:29.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:29.592-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:29.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:29.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:30.092-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:30.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:30.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:30.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:30.550-0500 2019-11-26T14:38:30.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:30.592-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:30.619-0500 2019-11-26T14:38:30.619-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:30.619-0500 2019-11-26T14:38:30.619-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:30.619-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:30.760-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:30.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:31.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:31.092-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:31.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:31.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:31.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:31.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:31.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:31.592-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:31.592-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:31.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:31.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:31.822-0500 2019-11-26T14:38:31.821-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:31.822-0500 2019-11-26T14:38:31.822-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:31.822-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:31.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:32.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:32.092-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:32.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:32.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:32.550-0500 2019-11-26T14:38:32.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:32.592-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:32.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:33.024-0500 2019-11-26T14:38:33.024-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:33.024-0500 2019-11-26T14:38:33.024-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:33.025-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:33.092-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:33.092-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:33.260-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:33.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:33.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:33.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:33.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:33.592-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:33.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:33.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:34.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:34.092-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:34.227-0500 2019-11-26T14:38:34.227-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:34.227-0500 2019-11-26T14:38:34.227-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:34.227-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:34.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:34.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:34.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:34.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:34.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:34.550-0500 2019-11-26T14:38:34.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:34.592-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:34.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:35.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:35.092-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:35.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:35.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:35.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:35.412-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:35.429-0500 2019-11-26T14:38:35.429-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:35.430-0500 2019-11-26T14:38:35.429-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:35.430-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:35.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:35.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:35.592-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:35.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:35.760-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:35.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:36.092-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:36.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:36.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:36.550-0500 2019-11-26T14:38:36.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:36.592-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:36.592-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:36.632-0500 2019-11-26T14:38:36.632-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:36.632-0500 2019-11-26T14:38:36.632-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:36.632-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:36.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:36.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:36.994-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:36.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:37.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:37.092-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:37.135-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:37.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:37.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:37.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:37.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:37.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:37.592-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:37.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:37.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:37.835-0500 2019-11-26T14:38:37.835-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:37.835-0500 2019-11-26T14:38:37.835-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:37.835-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:37.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:38.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:38.092-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:38.092-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:38.260-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:38.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:38.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:38.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:38.550-0500 2019-11-26T14:38:38.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:38.592-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:38.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:38.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:39.037-0500 2019-11-26T14:38:39.037-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:39.038-0500 2019-11-26T14:38:39.037-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:39.038-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:39.092-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:39.260-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:39.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:39.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:39.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:39.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:39.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:39.494-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:39.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:39.592-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:39.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:39.760-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:39.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:39.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:40.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:40.092-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:40.240-0500 2019-11-26T14:38:40.240-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:40.240-0500 2019-11-26T14:38:40.240-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:40.240-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:40.260-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:40.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:40.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:40.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:40.550-0500 2019-11-26T14:38:40.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:40.592-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:40.760-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:40.760-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:40.994-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:40.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:41.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:41.092-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:41.261-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:41.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:41.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:41.442-0500 2019-11-26T14:38:41.442-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:41.443-0500 2019-11-26T14:38:41.443-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:41.443-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:41.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:41.592-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:41.592-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:41.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:41.761-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:41.761-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:41.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:42.092-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:42.261-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:42.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:42.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:42.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:42.550-0500 2019-11-26T14:38:42.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:42.592-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:42.645-0500 2019-11-26T14:38:42.645-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:42.645-0500 2019-11-26T14:38:42.645-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:42.646-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:42.761-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:42.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:42.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:43.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:43.092-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:43.092-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:43.261-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:43.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:43.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:43.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:43.494-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:43.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:43.592-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:43.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:43.761-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:43.848-0500 2019-11-26T14:38:43.848-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:43.848-0500 2019-11-26T14:38:43.848-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:43.848-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:43.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:44.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:44.092-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:44.261-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:44.261-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:44.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:44.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:44.550-0500 2019-11-26T14:38:44.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:44.592-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:44.761-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:44.994-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:44.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:45.051-0500 2019-11-26T14:38:45.050-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:45.051-0500 2019-11-26T14:38:45.051-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:45.051-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:45.092-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:45.261-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:45.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:45.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:45.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:45.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:45.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:45.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:45.592-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:45.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:45.761-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:45.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:46.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:46.092-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:46.253-0500 2019-11-26T14:38:46.253-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:46.253-0500 2019-11-26T14:38:46.253-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:46.254-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:46.261-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:46.311-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:46.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:46.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:46.550-0500 2019-11-26T14:38:46.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:46.592-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:46.592-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:46.761-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:46.761-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:46.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:47.087-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:47.092-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:47.261-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:47.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:47.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:47.456-0500 2019-11-26T14:38:47.456-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:47.456-0500 2019-11-26T14:38:47.456-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:47.456-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:47.494-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:47.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:47.592-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:47.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:47.761-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:47.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:47.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:48.092-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:48.092-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:48.261-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:48.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:48.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:48.508-0500 I CONNPOOL [TaskExecutorPool-0] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:48.550-0500 2019-11-26T14:38:48.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:48.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:48.659-0500 2019-11-26T14:38:48.659-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:48.659-0500 2019-11-26T14:38:48.659-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:48.659-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:48.761-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:48.994-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:48.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:49.092-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:49.261-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:49.261-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:49.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:49.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:49.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:49.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:49.509-0500 I CONNPOOL [TaskExecutorPool-0] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Pool for localhost:20004 has expired.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:49.509-0500 I CONNPOOL [TaskExecutorPool-0] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Pool for localhost:20004 has expired.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:49.592-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:49.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:49.761-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:49.862-0500 2019-11-26T14:38:49.862-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:49.862-0500 2019-11-26T14:38:49.862-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:49.862-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:49.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:50.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:50.093-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:50.261-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:50.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:50.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:50.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:50.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:50.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:50.550-0500 2019-11-26T14:38:50.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:50.551-0500 2019-11-26T14:38:50.551-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:50.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:50.761-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:50.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:51.064-0500 2019-11-26T14:38:51.064-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:51.064-0500 2019-11-26T14:38:51.064-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:51.065-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:51.094-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:51.261-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:51.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:51.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:51.494-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:51.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:51.593-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:51.593-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:51.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:51.761-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:51.761-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:51.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:52.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:52.093-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:52.261-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:52.267-0500 2019-11-26T14:38:52.267-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:52.267-0500 2019-11-26T14:38:52.267-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:52.267-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:52.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:52.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:52.550-0500 2019-11-26T14:38:52.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:52.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:52.761-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:52.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:52.994-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:52.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:53.093-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:53.093-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:53.261-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:53.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:53.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:53.469-0500 2019-11-26T14:38:53.469-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:53.470-0500 2019-11-26T14:38:53.470-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:53.470-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:53.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:53.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:53.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:53.761-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:53.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:54.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:54.093-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:54.261-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:54.261-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:54.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:54.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:54.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:54.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:54.550-0500 2019-11-26T14:38:54.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:54.593-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:54.672-0500 2019-11-26T14:38:54.672-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:54.672-0500 2019-11-26T14:38:54.672-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:54.673-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:54.761-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:54.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:55.093-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:55.261-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:55.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:55.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:55.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:55.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:55.494-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:55.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:55.593-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:55.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:55.761-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:55.875-0500 2019-11-26T14:38:55.875-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:55.875-0500 2019-11-26T14:38:55.875-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:55.876-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:55.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:56.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:56.093-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:56.261-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:56.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:56.494-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:56.550-0500 2019-11-26T14:38:56.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:56.593-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:56.593-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:56.761-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:56.761-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:56.994-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:56.994-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:56.994-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:38:56.994-0500-5ddd7f505cde74b6784bbe42", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574797136994), what: "balancer.round", ns: "", details: { executionTimeMillis: 20000, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:56.994-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:38:57.078-0500 2019-11-26T14:38:57.078-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:57.078-0500 2019-11-26T14:38:57.078-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:57.078-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:57.093-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:57.261-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:57.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:57.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:57.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:57.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:57.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:57.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:57.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:57.697-0500 I CONNPOOL [ReplCoordExternNetwork] Dropping all pooled connections to localhost:20003 due to ShutdownInProgress: Pool for localhost:20003 has expired.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:57.697-0500 I NETWORK [conn23] end connection 127.0.0.1:52218 (11 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:57.761-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:57.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:38:58.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:58.093-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:58.093-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:58.261-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:38:58.280-0500 2019-11-26T14:38:58.280-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:58.280-0500 2019-11-26T14:38:58.280-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:58.281-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:38:58.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:38:58.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:58.550-0500 2019-11-26T14:38:58.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:58.593-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:58.761-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:59.093-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:59.261-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:59.261-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:59.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:38:59.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:38:59.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:38:59.483-0500 2019-11-26T14:38:59.483-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:59.483-0500 2019-11-26T14:38:59.483-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:38:59.483-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:38:59.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:38:59.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:38:59.761-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:00.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:00.093-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:00.261-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:00.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:00.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:00.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:00.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:00.550-0500 2019-11-26T14:39:00.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:00.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:00.686-0500 2019-11-26T14:39:00.686-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:00.686-0500 2019-11-26T14:39:00.686-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:00.686-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:00.761-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:01.093-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:01.261-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:01.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:01.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:01.593-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:01.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:01.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:01.761-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:01.761-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:01.889-0500 2019-11-26T14:39:01.888-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:01.889-0500 2019-11-26T14:39:01.889-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:01.889-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:02.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:02.093-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:02.261-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:02.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:02.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:02.550-0500 2019-11-26T14:39:02.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:02.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:02.761-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:02.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:03.092-0500 2019-11-26T14:39:03.092-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:03.092-0500 2019-11-26T14:39:03.092-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:03.092-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:03.093-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:03.093-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:03.261-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:03.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:03.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:03.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:03.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:03.761-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:04.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:04.093-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:04.261-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:04.261-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:04.295-0500 2019-11-26T14:39:04.295-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:04.295-0500 2019-11-26T14:39:04.295-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:04.296-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:04.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:04.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:04.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:04.550-0500 2019-11-26T14:39:04.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:04.593-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:04.761-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:05.093-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:05.261-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:05.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:05.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:05.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:05.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:05.498-0500 2019-11-26T14:39:05.498-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:05.499-0500 2019-11-26T14:39:05.499-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:05.499-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:05.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:05.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:05.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:05.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:05.761-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:06.093-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:06.261-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:06.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:06.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:06.550-0500 2019-11-26T14:39:06.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:06.593-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:06.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:06.701-0500 2019-11-26T14:39:06.701-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:06.701-0500 2019-11-26T14:39:06.701-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:06.702-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:06.761-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:06.761-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:06.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:07.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:07.093-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:07.135-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:07.261-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:07.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:07.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:07.498-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:07.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:07.593-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:07.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:07.761-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:07.904-0500 2019-11-26T14:39:07.904-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:07.904-0500 2019-11-26T14:39:07.904-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:07.904-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:07.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:07.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:08.093-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:08.093-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:08.261-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:08.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:08.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:08.550-0500 2019-11-26T14:39:08.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:08.593-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:08.761-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:08.998-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:08.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:09.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:09.093-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:09.107-0500 2019-11-26T14:39:09.107-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:09.107-0500 2019-11-26T14:39:09.107-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:09.107-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:09.261-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:09.262-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:09.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:09.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:09.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:09.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:09.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:09.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:09.762-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:09.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:09.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:10.093-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:10.262-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:10.262-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:10.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:10.309-0500 2019-11-26T14:39:10.309-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:10.310-0500 2019-11-26T14:39:10.309-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:10.310-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:10.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:10.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:10.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:10.550-0500 2019-11-26T14:39:10.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:10.593-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:10.762-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:10.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:11.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:11.093-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:11.262-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:11.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:11.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:11.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:11.498-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:11.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:11.512-0500 2019-11-26T14:39:11.512-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:11.512-0500 2019-11-26T14:39:11.512-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:11.512-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:11.593-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:11.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:11.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:11.762-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:11.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:12.093-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:12.262-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:12.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:12.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:12.550-0500 2019-11-26T14:39:12.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:12.593-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:12.715-0500 2019-11-26T14:39:12.715-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:12.715-0500 2019-11-26T14:39:12.715-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:12.715-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:12.762-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:12.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:12.998-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:12.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:13.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:13.093-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:13.093-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:13.263-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:13.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:13.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:13.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:13.593-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:13.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:13.763-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:13.763-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:13.917-0500 2019-11-26T14:39:13.917-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:13.917-0500 2019-11-26T14:39:13.917-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:13.918-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:13.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:14.093-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:14.263-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:14.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:14.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:14.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:14.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:14.550-0500 2019-11-26T14:39:14.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:14.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:14.763-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:14.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:14.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:15.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:15.093-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:15.120-0500 2019-11-26T14:39:15.120-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:15.120-0500 2019-11-26T14:39:15.120-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:15.121-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:15.263-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:15.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:15.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:15.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:15.498-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:15.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:15.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:15.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:15.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:15.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:16.093-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:16.263-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:16.263-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:16.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:16.323-0500 2019-11-26T14:39:16.323-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:16.323-0500 2019-11-26T14:39:16.323-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:16.323-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:16.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:16.550-0500 2019-11-26T14:39:16.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:16.593-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:16.593-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:16.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:16.998-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:16.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:17.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:17.093-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:17.263-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:17.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:17.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:17.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:17.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:17.525-0500 2019-11-26T14:39:17.525-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:17.525-0500 2019-11-26T14:39:17.525-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:17.526-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:17.593-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:17.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:17.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:17.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:18.093-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:18.094-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:18.263-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:18.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:18.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:18.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:18.550-0500 2019-11-26T14:39:18.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:18.594-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:18.728-0500 2019-11-26T14:39:18.728-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:18.728-0500 2019-11-26T14:39:18.728-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:18.728-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:18.763-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:18.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:18.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:19.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:19.094-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:19.094-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:19.263-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:19.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:19.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:19.498-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:19.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:19.594-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:19.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:19.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:19.931-0500 2019-11-26T14:39:19.931-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:19.931-0500 2019-11-26T14:39:19.931-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:19.931-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:19.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:19.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:20.094-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:20.263-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:20.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:20.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:20.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:20.550-0500 2019-11-26T14:39:20.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:20.551-0500 2019-11-26T14:39:20.551-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:20.594-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:20.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:20.998-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:20.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:21.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:21.094-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:21.133-0500 2019-11-26T14:39:21.133-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:21.134-0500 2019-11-26T14:39:21.134-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:21.134-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:21.263-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:21.263-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:21.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:21.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:21.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:21.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:21.594-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:21.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:21.763-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:21.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:22.094-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:22.263-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:22.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:22.336-0500 2019-11-26T14:39:22.336-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:22.336-0500 2019-11-26T14:39:22.336-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:22.337-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:22.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:22.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:22.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:22.550-0500 2019-11-26T14:39:22.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:22.594-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:22.594-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:22.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:22.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:23.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:23.094-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:23.263-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:23.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:23.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:23.498-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:23.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:23.539-0500 2019-11-26T14:39:23.539-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:23.539-0500 2019-11-26T14:39:23.539-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:23.539-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:23.594-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:23.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:23.763-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:23.763-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:23.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:24.094-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:24.094-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:24.263-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:24.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:24.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:24.550-0500 2019-11-26T14:39:24.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:24.594-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:24.742-0500 2019-11-26T14:39:24.741-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:24.742-0500 2019-11-26T14:39:24.742-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:24.742-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:24.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:24.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:24.998-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:24.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:25.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:25.094-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:25.263-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:25.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:25.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:25.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:25.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:25.594-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:25.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:25.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:25.944-0500 2019-11-26T14:39:25.944-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:25.945-0500 2019-11-26T14:39:25.944-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:25.945-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:25.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:26.094-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:26.263-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:26.263-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:26.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:26.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:26.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:26.498-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:26.550-0500 2019-11-26T14:39:26.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:26.594-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:26.763-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:26.997-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:26.997-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:39:26.997-0500-5ddd7f6e5cde74b6784bbe7a", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574797166997), what: "balancer.round", ns: "", details: { executionTimeMillis: 19999, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:26.998-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:27.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:27.095-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:27.147-0500 2019-11-26T14:39:27.147-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:27.147-0500 2019-11-26T14:39:27.147-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:27.148-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:27.263-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:27.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:27.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:27.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:27.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:27.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:27.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:27.595-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:27.595-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:27.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:27.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:27.763-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:28.096-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:28.263-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:28.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:28.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:28.350-0500 2019-11-26T14:39:28.350-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:28.350-0500 2019-11-26T14:39:28.350-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:28.350-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:39:28.550-0500 2019-11-26T14:39:28.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:28.596-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:28.596-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:28.763-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:28.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:29.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:29.096-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:29.263-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:29.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:29.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:29.553-0500 2019-11-26T14:39:29.553-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:29.553-0500 2019-11-26T14:39:29.553-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:29.553-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:29.596-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:29.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:29.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:29.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:30.096-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:30.096-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:30.263-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:30.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:30.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:30.550-0500 2019-11-26T14:39:30.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:30.596-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:30.755-0500 2019-11-26T14:39:30.755-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:30.756-0500 2019-11-26T14:39:30.755-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:30.756-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:30.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:31.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:31.096-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:31.263-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:31.263-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:31.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:31.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:31.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:31.596-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:31.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:31.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:31.958-0500 2019-11-26T14:39:31.958-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:31.958-0500 2019-11-26T14:39:31.958-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:31.959-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:32.096-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:32.263-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:32.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:32.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:32.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:32.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:32.550-0500 2019-11-26T14:39:32.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:32.596-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:32.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:33.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:33.096-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:33.161-0500 2019-11-26T14:39:33.161-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:33.161-0500 2019-11-26T14:39:33.161-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:33.161-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:33.263-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:33.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:33.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:33.596-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:33.597-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:33.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:33.763-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:33.763-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:34.097-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:34.263-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:34.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:34.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:34.364-0500 2019-11-26T14:39:34.364-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:34.364-0500 2019-11-26T14:39:34.364-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:34.364-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:39:34.550-0500 2019-11-26T14:39:34.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:34.597-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:34.597-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:34.763-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:34.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:35.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:35.097-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:35.263-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:35.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:35.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:35.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:35.566-0500 2019-11-26T14:39:35.566-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:35.567-0500 2019-11-26T14:39:35.567-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:35.567-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:35.597-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:35.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:35.763-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:36.097-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:36.097-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:36.263-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:36.263-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:36.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:36.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:36.550-0500 2019-11-26T14:39:36.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:36.597-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:36.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:36.769-0500 2019-11-26T14:39:36.769-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:36.769-0500 2019-11-26T14:39:36.769-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:36.770-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:37.001-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:37.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:37.097-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:37.135-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:37.263-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:37.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:37.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:37.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:37.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:37.501-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:37.501-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:37.597-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:37.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:37.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:37.972-0500 2019-11-26T14:39:37.972-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:37.972-0500 2019-11-26T14:39:37.972-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:37.972-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:38.001-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:38.097-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:38.264-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:38.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:38.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:38.501-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:38.550-0500 2019-11-26T14:39:38.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:38.597-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:38.763-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:38.763-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:39.001-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:39.001-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:39.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:39.097-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:39.175-0500 2019-11-26T14:39:39.175-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:39.175-0500 2019-11-26T14:39:39.175-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:39.176-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:39.263-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:39.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:39.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:39.501-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:39.597-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:39.598-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:39.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:39.763-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:39.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:39.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:40.001-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:40.099-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:40.264-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:40.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:40.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:40.379-0500 2019-11-26T14:39:40.379-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:40.379-0500 2019-11-26T14:39:40.379-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:40.380-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:40.501-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:40.550-0500 2019-11-26T14:39:40.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:40.599-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:40.600-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:40.765-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:41.001-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:41.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:41.101-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:41.265-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:41.266-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:41.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:41.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:41.501-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:41.502-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:41.584-0500 2019-11-26T14:39:41.584-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:41.584-0500 2019-11-26T14:39:41.584-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:41.584-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:41.601-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:41.602-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:41.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:41.767-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:42.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:42.103-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:42.267-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:42.268-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:42.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:42.506-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:42.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:42.550-0500 2019-11-26T14:39:42.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:42.603-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:42.604-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:42.769-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:42.792-0500 2019-11-26T14:39:42.792-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:42.793-0500 2019-11-26T14:39:42.793-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:42.793-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:43.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:43.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:43.105-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:43.269-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:43.270-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:43.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:43.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:43.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:43.605-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:43.606-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:43.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:43.772-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:43.995-0500 2019-11-26T14:39:43.995-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:43.996-0500 2019-11-26T14:39:43.995-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:43.996-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:44.006-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:44.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:44.107-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:44.272-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:44.273-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:44.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:44.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:44.550-0500 2019-11-26T14:39:44.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:44.607-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:44.608-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:44.774-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:45.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:45.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:45.109-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:45.199-0500 2019-11-26T14:39:45.199-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:45.199-0500 2019-11-26T14:39:45.199-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:45.200-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:45.274-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:45.275-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:45.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:45.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:45.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:45.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:45.609-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:45.610-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:45.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:45.776-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:46.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:46.110-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:46.276-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:46.277-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:46.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:46.402-0500 2019-11-26T14:39:46.402-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:46.402-0500 2019-11-26T14:39:46.402-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:46.403-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:46.506-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:46.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:46.550-0500 2019-11-26T14:39:46.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:46.610-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:46.611-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:46.778-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:47.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:47.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:47.112-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:47.278-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:47.279-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:47.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:47.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:47.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:47.606-0500 2019-11-26T14:39:47.606-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:47.607-0500 2019-11-26T14:39:47.606-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:47.607-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:47.612-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:47.612-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:47.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:47.780-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:48.006-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:48.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:48.113-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:48.280-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:48.281-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:48.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:48.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:48.509-0500 I NETWORK [conn62] end connection 127.0.0.1:46072 (23 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:48.509-0500 I NETWORK [conn224] end connection 127.0.0.1:48636 (22 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:39:48.550-0500 2019-11-26T14:39:48.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:48.613-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:48.614-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:48.782-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:48.809-0500 2019-11-26T14:39:48.809-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:48.810-0500 2019-11-26T14:39:48.810-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:48.810-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:49.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:49.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:49.115-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:49.282-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:49.283-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:49.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:49.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:49.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:49.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:49.615-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:49.616-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:49.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:49.784-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:50.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:50.014-0500 2019-11-26T14:39:50.014-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:50.014-0500 2019-11-26T14:39:50.014-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:50.014-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:50.117-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:50.284-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:50.285-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:50.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:50.506-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:50.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:50.550-0500 2019-11-26T14:39:50.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:50.551-0500 2019-11-26T14:39:50.551-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:50.616-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:50.617-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:50.786-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:51.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:51.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:51.118-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:51.218-0500 2019-11-26T14:39:51.218-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:51.218-0500 2019-11-26T14:39:51.218-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:51.219-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:51.286-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:51.287-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:51.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:51.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:51.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:51.618-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:51.619-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:51.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:51.788-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:52.006-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:52.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:52.120-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:52.288-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:52.289-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:52.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:52.422-0500 2019-11-26T14:39:52.422-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:52.422-0500 2019-11-26T14:39:52.422-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:52.423-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:52.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:52.550-0500 2019-11-26T14:39:52.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:52.620-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:52.621-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:52.790-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:53.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:53.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:53.122-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:53.290-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:53.291-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:53.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:53.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:53.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:53.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:53.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:53.622-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:53.622-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:53.626-0500 2019-11-26T14:39:53.626-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:53.626-0500 2019-11-26T14:39:53.626-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:53.627-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:53.791-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:54.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:54.123-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:54.291-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:54.291-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:54.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:54.506-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:54.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:54.550-0500 2019-11-26T14:39:54.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:54.623-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:54.623-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:54.791-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:54.829-0500 2019-11-26T14:39:54.829-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:54.829-0500 2019-11-26T14:39:54.829-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:54.830-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:55.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:55.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:55.123-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:55.291-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:55.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:55.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:55.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:55.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:55.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:55.623-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:55.791-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:56.006-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:56.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:56.032-0500 2019-11-26T14:39:56.032-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:56.032-0500 2019-11-26T14:39:56.032-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:56.032-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:56.123-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:56.123-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:56.291-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:56.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:56.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:39:56.550-0500 2019-11-26T14:39:56.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:56.623-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:56.791-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:56.791-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:57.000-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:57.000-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:39:57.000-0500-5ddd7f8c5cde74b6784bbeb2", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574797197000), what: "balancer.round", ns: "", details: { executionTimeMillis: 20000, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:57.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:57.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:57.123-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:57.235-0500 2019-11-26T14:39:57.235-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:57.235-0500 2019-11-26T14:39:57.235-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:57.235-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:57.291-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:57.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:57.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:57.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:57.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:57.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:57.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:57.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:57.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:57.623-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:57.791-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:57.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:58.123-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:58.291-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:39:58.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:58.438-0500 2019-11-26T14:39:58.438-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:58.438-0500 2019-11-26T14:39:58.438-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:58.438-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:58.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:39:58.550-0500 2019-11-26T14:39:58.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:58.623-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:58.792-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:39:59.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:59.123-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:59.292-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:59.292-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:39:59.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:39:59.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:39:59.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:39:59.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:59.623-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:39:59.623-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:39:59.641-0500 2019-11-26T14:39:59.641-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:59.641-0500 2019-11-26T14:39:59.641-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:39:59.641-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:39:59.792-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:00.123-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:00.292-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:00.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:00.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:00.550-0500 2019-11-26T14:40:00.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:00.623-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:00.792-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:00.844-0500 2019-11-26T14:40:00.843-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:00.844-0500 2019-11-26T14:40:00.844-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:00.844-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:01.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:01.123-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:01.123-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:01.293-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:01.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:01.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:01.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:01.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:01.623-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:01.793-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:01.793-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:02.047-0500 2019-11-26T14:40:02.046-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:02.047-0500 2019-11-26T14:40:02.047-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:02.047-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:02.123-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:02.293-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:02.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:02.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:02.550-0500 2019-11-26T14:40:02.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:02.623-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:02.793-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:02.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:03.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:03.123-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:03.249-0500 2019-11-26T14:40:03.249-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:03.250-0500 2019-11-26T14:40:03.250-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:03.250-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:03.293-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:03.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:03.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:03.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:03.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:03.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:03.623-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:03.793-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:04.123-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:04.293-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:04.293-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:04.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:04.452-0500 2019-11-26T14:40:04.452-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:04.453-0500 2019-11-26T14:40:04.452-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:04.453-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:40:04.550-0500 2019-11-26T14:40:04.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:04.623-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:04.623-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:04.793-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:05.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:05.123-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:05.293-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:05.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:05.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:05.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:05.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:05.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:05.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:05.624-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:05.655-0500 2019-11-26T14:40:05.655-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:05.655-0500 2019-11-26T14:40:05.655-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:05.656-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:05.793-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:06.123-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:06.123-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:06.293-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:06.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:06.550-0500 2019-11-26T14:40:06.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:06.623-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:06.793-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:06.793-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:06.858-0500 2019-11-26T14:40:06.858-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:06.858-0500 2019-11-26T14:40:06.858-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:06.859-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:07.002-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:07.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:07.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:07.123-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:07.135-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:07.293-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:07.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:07.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:07.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:07.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:07.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:07.624-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:07.793-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:07.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:08.003-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:08.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:08.061-0500 2019-11-26T14:40:08.061-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:08.061-0500 2019-11-26T14:40:08.061-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:08.061-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:08.124-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:08.293-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:08.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:08.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:08.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:08.550-0500 2019-11-26T14:40:08.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:08.625-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:08.793-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:09.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:09.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:09.125-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:09.264-0500 2019-11-26T14:40:09.264-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:09.264-0500 2019-11-26T14:40:09.264-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:09.264-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:09.293-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:09.293-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:09.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:09.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:09.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:09.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:09.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:09.625-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:09.625-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:09.793-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:09.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:10.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:10.125-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:10.293-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:10.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:10.467-0500 2019-11-26T14:40:10.467-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:10.467-0500 2019-11-26T14:40:10.467-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:10.467-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:10.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:10.503-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:10.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:10.550-0500 2019-11-26T14:40:10.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:10.625-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:10.793-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:11.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:11.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:11.125-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:11.125-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:11.293-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:11.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:11.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:11.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:11.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:11.625-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:11.670-0500 2019-11-26T14:40:11.669-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:11.670-0500 2019-11-26T14:40:11.670-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:11.670-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:11.793-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:11.794-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:12.003-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:12.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:12.125-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:12.294-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:12.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:12.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:12.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:12.550-0500 2019-11-26T14:40:12.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:12.625-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:12.794-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:12.794-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:12.872-0500 2019-11-26T14:40:12.872-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:12.872-0500 2019-11-26T14:40:12.872-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:12.873-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:13.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:13.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:13.125-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:13.294-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:13.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:13.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:13.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:13.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:13.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:13.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:13.625-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:13.794-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:13.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:14.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:14.075-0500 2019-11-26T14:40:14.075-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:14.075-0500 2019-11-26T14:40:14.075-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:14.075-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:14.125-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:14.294-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:14.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:14.503-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:14.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:14.550-0500 2019-11-26T14:40:14.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:14.625-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:14.625-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:14.794-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:15.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:15.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:15.125-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:15.278-0500 2019-11-26T14:40:15.278-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:15.278-0500 2019-11-26T14:40:15.278-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:15.278-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:15.294-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:15.294-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:15.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:15.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:15.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:15.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:15.625-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:15.794-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:16.003-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:16.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:16.125-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:16.125-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:16.294-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:16.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:16.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:16.480-0500 2019-11-26T14:40:16.480-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:16.481-0500 2019-11-26T14:40:16.480-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:16.481-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:16.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:16.550-0500 2019-11-26T14:40:16.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:16.625-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:16.794-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:17.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:17.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:17.125-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:17.294-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:17.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:17.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:17.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:17.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:17.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:17.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:17.625-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:17.683-0500 2019-11-26T14:40:17.683-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:17.683-0500 2019-11-26T14:40:17.683-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:17.683-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:17.794-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:17.794-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:18.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:18.125-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:18.294-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:18.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:18.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:18.503-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:18.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:18.550-0500 2019-11-26T14:40:18.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:18.625-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:18.794-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:18.885-0500 2019-11-26T14:40:18.885-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:18.886-0500 2019-11-26T14:40:18.886-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:18.886-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:18.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:19.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:19.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:19.125-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:19.294-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:19.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:19.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:19.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:19.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:19.625-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:19.625-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:19.794-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:20.003-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:20.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:20.089-0500 2019-11-26T14:40:20.088-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:20.089-0500 2019-11-26T14:40:20.089-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:20.089-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:20.125-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:20.294-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:20.294-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:20.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:20.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:20.550-0500 2019-11-26T14:40:20.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:20.551-0500 2019-11-26T14:40:20.551-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:20.626-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:20.794-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:21.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:21.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:21.126-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:21.126-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:21.291-0500 2019-11-26T14:40:21.291-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:21.291-0500 2019-11-26T14:40:21.291-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:21.292-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:21.294-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:21.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:21.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:21.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:21.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:21.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:21.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:21.626-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:21.794-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:22.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:22.126-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:22.294-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:22.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:22.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:22.494-0500 2019-11-26T14:40:22.494-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:22.494-0500 2019-11-26T14:40:22.494-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:22.495-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:22.503-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:22.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:22.550-0500 2019-11-26T14:40:22.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:22.626-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:22.794-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:22.795-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:23.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:23.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:23.126-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:23.295-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:23.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:23.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:23.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:23.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:23.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:23.626-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:23.697-0500 2019-11-26T14:40:23.697-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:23.697-0500 2019-11-26T14:40:23.697-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:23.697-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:23.795-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:23.795-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:24.003-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:24.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:24.126-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:24.295-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:24.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:24.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:24.550-0500 2019-11-26T14:40:24.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:24.626-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:24.626-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:24.795-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:24.900-0500 2019-11-26T14:40:24.900-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:24.900-0500 2019-11-26T14:40:24.900-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:24.900-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:24.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:25.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:25.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:25.126-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:25.295-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:25.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:25.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:25.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:25.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:25.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:25.626-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:25.795-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:26.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:26.103-0500 2019-11-26T14:40:26.102-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:26.103-0500 2019-11-26T14:40:26.103-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:26.103-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:26.126-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:26.127-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:26.295-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:26.296-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:26.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:26.503-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:26.503-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:26.550-0500 2019-11-26T14:40:26.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:26.627-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:26.796-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:27.002-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:27.002-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:40:27.002-0500-5ddd7faa5cde74b6784bbeea", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574797227002), what: "balancer.round", ns: "", details: { executionTimeMillis: 20000, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:27.003-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:27.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:27.127-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:27.127-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:27.296-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:27.296-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:27.305-0500 2019-11-26T14:40:27.305-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:27.305-0500 2019-11-26T14:40:27.305-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:27.306-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:27.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:27.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:27.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:27.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:27.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:27.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:27.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:27.627-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:27.796-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:28.127-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:28.296-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:28.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:28.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:28.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:28.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:28.508-0500 2019-11-26T14:40:28.508-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:28.508-0500 2019-11-26T14:40:28.508-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:28.508-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:40:28.550-0500 2019-11-26T14:40:28.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:28.627-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:28.796-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:29.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:29.127-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:29.296-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:29.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:29.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:29.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:29.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:29.627-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:29.711-0500 2019-11-26T14:40:29.710-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:29.711-0500 2019-11-26T14:40:29.711-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:29.711-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:29.796-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:29.796-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:30.127-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:30.296-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:30.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:30.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:30.550-0500 2019-11-26T14:40:30.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:30.627-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:30.627-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:30.796-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:30.913-0500 2019-11-26T14:40:30.913-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:30.914-0500 2019-11-26T14:40:30.913-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:30.914-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:30.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:31.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:31.127-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:31.296-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:31.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:31.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:31.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:31.627-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:31.796-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:32.119-0500 2019-11-26T14:40:32.118-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:32.119-0500 2019-11-26T14:40:32.119-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:32.119-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:32.127-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:32.127-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:32.296-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:32.296-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:32.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:32.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:32.550-0500 2019-11-26T14:40:32.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:32.628-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:32.797-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:33.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:33.128-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:33.128-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:33.296-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:33.321-0500 2019-11-26T14:40:33.321-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:33.321-0500 2019-11-26T14:40:33.321-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:33.322-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:33.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:33.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:33.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:33.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:33.628-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:33.796-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:34.128-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:34.296-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:34.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:34.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:34.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:34.524-0500 2019-11-26T14:40:34.524-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:34.524-0500 2019-11-26T14:40:34.524-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:34.525-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:40:34.550-0500 2019-11-26T14:40:34.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:34.628-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:34.796-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:34.796-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:35.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:35.128-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:35.296-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:35.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:35.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:35.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:35.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:35.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:35.628-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:35.727-0500 2019-11-26T14:40:35.727-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:35.727-0500 2019-11-26T14:40:35.727-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:35.727-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:35.796-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:35.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:36.128-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:36.296-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:36.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:36.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:36.550-0500 2019-11-26T14:40:36.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:36.628-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:36.628-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:36.796-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:36.930-0500 2019-11-26T14:40:36.930-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:36.930-0500 2019-11-26T14:40:36.930-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:36.930-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:37.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:37.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:37.128-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:37.135-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:37.296-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:37.296-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:37.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:37.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:37.504-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:37.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:37.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:37.628-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:37.796-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:38.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:38.128-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:38.129-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:38.133-0500 2019-11-26T14:40:38.132-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:38.133-0500 2019-11-26T14:40:38.133-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:38.133-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:38.296-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:38.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:38.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:38.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:38.550-0500 2019-11-26T14:40:38.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:38.629-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:38.796-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:39.004-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:39.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:39.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:39.129-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:39.129-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:39.297-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:39.335-0500 2019-11-26T14:40:39.335-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:39.335-0500 2019-11-26T14:40:39.335-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:39.336-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:39.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:39.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:39.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:39.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:39.629-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:39.797-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:39.797-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:39.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:40.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:40.129-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:40.297-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:40.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:40.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:40.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:40.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:40.538-0500 2019-11-26T14:40:40.538-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:40.538-0500 2019-11-26T14:40:40.538-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:40.538-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:40:40.550-0500 2019-11-26T14:40:40.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:40.629-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:40.797-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:40.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:41.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:41.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:41.129-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:41.297-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:41.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:41.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:41.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:41.504-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:41.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:41.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:41.629-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:41.741-0500 2019-11-26T14:40:41.741-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:41.741-0500 2019-11-26T14:40:41.741-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:41.741-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:41.797-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:42.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:42.130-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:42.297-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:42.298-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:42.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:42.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:42.550-0500 2019-11-26T14:40:42.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:42.630-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:42.631-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:42.797-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:42.944-0500 2019-11-26T14:40:42.943-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:42.944-0500 2019-11-26T14:40:42.944-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:42.944-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:43.004-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:43.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:43.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:43.131-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:43.297-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:43.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:43.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:43.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:43.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:43.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:43.631-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:43.631-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:43.797-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:44.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:44.131-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:44.146-0500 2019-11-26T14:40:44.146-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:44.147-0500 2019-11-26T14:40:44.146-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:44.147-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:44.297-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:44.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:44.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:44.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:44.550-0500 2019-11-26T14:40:44.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:44.631-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:44.797-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:44.797-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:45.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:45.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:45.131-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:45.131-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:45.297-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:45.349-0500 2019-11-26T14:40:45.349-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:45.349-0500 2019-11-26T14:40:45.349-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:45.350-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:45.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:45.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:45.504-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:45.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:45.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:45.631-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:45.797-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:45.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:46.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:46.131-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:46.297-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:46.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:46.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:46.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:46.550-0500 2019-11-26T14:40:46.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:46.552-0500 2019-11-26T14:40:46.552-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:46.552-0500 2019-11-26T14:40:46.552-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:46.552-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:46.631-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:46.797-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:47.004-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:47.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:47.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:47.131-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:47.297-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:47.297-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:47.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:47.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:47.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:47.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:47.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:47.631-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:47.755-0500 2019-11-26T14:40:47.754-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:47.755-0500 2019-11-26T14:40:47.755-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:47.755-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:47.797-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:48.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:48.132-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:48.297-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:48.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:48.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:48.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:48.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:48.550-0500 2019-11-26T14:40:48.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:48.632-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:48.632-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:48.797-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:48.957-0500 2019-11-26T14:40:48.957-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:48.957-0500 2019-11-26T14:40:48.957-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:48.958-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:49.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:49.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:49.133-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:49.297-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:49.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:49.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:49.504-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:49.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:49.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:49.633-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:49.633-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:49.797-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:49.797-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:50.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:50.134-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:50.160-0500 2019-11-26T14:40:50.160-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:50.160-0500 2019-11-26T14:40:50.160-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:50.160-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:50.297-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:50.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:50.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:50.550-0500 2019-11-26T14:40:50.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:50.551-0500 2019-11-26T14:40:50.551-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:50.634-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:50.634-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:50.797-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:50.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:51.004-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:51.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:51.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:51.134-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:51.298-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:51.363-0500 2019-11-26T14:40:51.362-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:51.363-0500 2019-11-26T14:40:51.363-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:51.363-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:51.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:51.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:51.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:51.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:51.634-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:51.798-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:52.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:52.134-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:52.135-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:52.298-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:52.298-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:52.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:52.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:52.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:52.550-0500 2019-11-26T14:40:52.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:52.565-0500 2019-11-26T14:40:52.565-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:52.565-0500 2019-11-26T14:40:52.565-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:52.566-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:52.635-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:52.798-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:53.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:53.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:53.135-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:53.135-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:53.298-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:53.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:53.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:53.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:53.504-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:53.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:53.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:53.635-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:53.768-0500 2019-11-26T14:40:53.768-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:53.768-0500 2019-11-26T14:40:53.768-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:53.769-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:53.798-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:54.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:54.135-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:54.298-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:54.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:54.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:54.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:54.550-0500 2019-11-26T14:40:54.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:54.635-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:54.798-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:54.799-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:54.971-0500 2019-11-26T14:40:54.971-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:54.971-0500 2019-11-26T14:40:54.971-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:54.971-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:55.004-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:55.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:55.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:55.135-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:55.299-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:55.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:55.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:55.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:55.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:55.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:55.636-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:55.722-0500 I QUERY [clientcursormon] Cursor id 673340541899209420 timed out, idle since 2019-11-26T14:30:54.434-0500
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:55.722-0500 I QUERY [clientcursormon] Cursor id 4677872113117916318 timed out, idle since 2019-11-26T14:30:54.434-0500
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:55.799-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:55.799-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:56.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:56.051-0500 I QUERY [clientcursormon] Cursor id 4843086847687280938 timed out, idle since 2019-11-26T14:30:54.671-0500
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:56.136-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:56.174-0500 2019-11-26T14:40:56.173-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:56.174-0500 2019-11-26T14:40:56.174-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:56.174-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:56.299-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:56.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:56.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:56.504-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:56.550-0500 2019-11-26T14:40:56.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:56.636-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:56.636-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:56.800-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:56.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:57.004-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:57.004-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:40:57.004-0500-5ddd7fc95cde74b6784bbf24", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574797257004), what: "balancer.round", ns: "", details: { executionTimeMillis: 20000, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:57.004-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:57.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:57.134-0500 I CONNPOOL [ShardRegistry] Connecting to localhost:20000
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:57.134-0500 I NETWORK [listener] connection accepted from 127.0.0.1:57132 #156 (20 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:57.135-0500 I NETWORK [conn156] received client metadata from 127.0.0.1:57132 conn156: { driver: { name: "NetworkInterfaceTL", version: "0.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:57.136-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:57.300-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:57.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:40:57.377-0500 2019-11-26T14:40:57.376-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:57.377-0500 2019-11-26T14:40:57.377-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:57.377-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:57.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:57.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:57.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:57.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:57.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:57.636-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:57.801-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:58.136-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:58.137-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:58.301-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:58.301-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:40:58.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:40:58.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:58.550-0500 2019-11-26T14:40:58.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:40:58.579-0500 2019-11-26T14:40:58.579-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:58.579-0500 2019-11-26T14:40:58.579-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:58.580-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:58.637-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:58.801-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:40:59.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:59.137-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:59.137-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:59.301-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:40:59.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:40:59.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:59.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:40:59.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:40:59.637-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:40:59.782-0500 2019-11-26T14:40:59.782-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:59.782-0500 2019-11-26T14:40:59.782-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:40:59.782-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:40:59.801-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:00.137-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:00.301-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:00.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:00.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:00.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:00.550-0500 2019-11-26T14:41:00.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:00.637-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:00.801-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:00.801-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:00.985-0500 2019-11-26T14:41:00.985-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:00.985-0500 2019-11-26T14:41:00.985-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:00.985-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:01.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:01.138-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:01.301-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:01.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:01.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:01.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:01.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:01.638-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:01.801-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:01.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:02.138-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:02.188-0500 2019-11-26T14:41:02.187-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:02.188-0500 2019-11-26T14:41:02.188-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:02.188-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:02.301-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:02.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:02.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:02.550-0500 2019-11-26T14:41:02.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:02.638-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:02.638-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:02.801-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:03.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:03.139-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:03.301-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:03.301-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:03.390-0500 2019-11-26T14:41:03.390-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:03.391-0500 2019-11-26T14:41:03.390-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:03.391-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:03.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:03.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:03.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:03.639-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:03.639-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:03.801-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:04.140-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:04.301-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:04.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:04.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:04.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:04.550-0500 2019-11-26T14:41:04.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:04.594-0500 2019-11-26T14:41:04.594-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:04.594-0500 2019-11-26T14:41:04.594-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:04.595-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:04.640-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:04.640-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:04.801-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:05.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:05.140-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:05.301-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:05.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:05.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:05.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:05.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:05.640-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:05.797-0500 2019-11-26T14:41:05.797-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:05.797-0500 2019-11-26T14:41:05.797-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:05.797-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:05.801-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:05.801-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:06.140-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:06.141-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:06.301-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:06.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:06.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:06.550-0500 2019-11-26T14:41:06.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:06.640-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:06.801-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:06.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:07.000-0500 2019-11-26T14:41:07.000-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:07.000-0500 2019-11-26T14:41:07.000-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:07.000-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:07.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:07.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:07.135-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:07.140-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:07.301-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:07.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:07.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:07.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:07.506-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:07.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:07.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:07.640-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:07.801-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:08.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:08.140-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:08.203-0500 2019-11-26T14:41:08.202-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:08.203-0500 2019-11-26T14:41:08.203-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:08.203-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:08.301-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:08.301-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:08.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:08.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:08.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:08.550-0500 2019-11-26T14:41:08.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:08.640-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:08.801-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:09.006-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:09.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:09.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:09.140-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:09.301-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:09.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:09.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:09.405-0500 2019-11-26T14:41:09.405-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:09.406-0500 2019-11-26T14:41:09.406-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:09.406-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:09.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:09.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:09.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:09.640-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:09.640-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:09.801-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:09.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:10.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:10.140-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:10.301-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:10.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:10.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:10.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:10.550-0500 2019-11-26T14:41:10.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:10.608-0500 2019-11-26T14:41:10.608-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:10.608-0500 2019-11-26T14:41:10.608-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:10.609-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:10.641-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:10.801-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:10.801-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:11.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:11.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:11.141-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:11.141-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:11.301-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:11.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:11.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:11.506-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:11.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:11.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:11.641-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:11.801-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:11.811-0500 2019-11-26T14:41:11.811-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:11.811-0500 2019-11-26T14:41:11.811-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:11.811-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:11.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:12.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:12.141-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:12.301-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:12.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:12.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:12.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:12.550-0500 2019-11-26T14:41:12.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:12.641-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:12.801-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:13.006-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:13.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:13.014-0500 2019-11-26T14:41:13.014-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:13.014-0500 2019-11-26T14:41:13.014-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:13.014-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:13.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:13.141-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:13.301-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:13.301-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:13.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:13.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:13.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:13.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:13.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:13.641-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:13.801-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:14.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:14.141-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:14.216-0500 2019-11-26T14:41:14.216-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:14.217-0500 2019-11-26T14:41:14.217-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:14.217-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:14.301-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:14.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:14.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:14.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:14.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:14.550-0500 2019-11-26T14:41:14.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:14.641-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:14.641-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:14.801-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:15.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:15.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:15.141-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:15.301-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:15.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:15.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:15.419-0500 2019-11-26T14:41:15.419-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:15.419-0500 2019-11-26T14:41:15.419-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:15.420-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:15.506-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:15.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:15.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:15.641-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:15.801-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:15.801-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:16.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:16.141-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:16.141-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:16.301-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:16.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:16.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:16.550-0500 2019-11-26T14:41:16.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:16.622-0500 2019-11-26T14:41:16.622-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:16.622-0500 2019-11-26T14:41:16.622-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:16.622-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:16.641-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:16.801-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:16.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:17.006-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:17.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:17.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:17.141-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:17.302-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:17.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:17.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:17.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:17.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:17.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:17.641-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:17.802-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:17.825-0500 2019-11-26T14:41:17.824-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:17.825-0500 2019-11-26T14:41:17.825-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:17.825-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:18.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:18.141-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:18.302-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:18.303-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:18.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:18.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:18.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:18.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:18.550-0500 2019-11-26T14:41:18.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:18.641-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:18.803-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:19.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:19.027-0500 2019-11-26T14:41:19.027-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:19.027-0500 2019-11-26T14:41:19.027-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:19.028-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:19.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:19.141-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:19.303-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:19.303-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:19.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:19.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:19.506-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:19.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:19.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:19.641-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:19.641-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:19.803-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:20.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:20.141-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:20.230-0500 2019-11-26T14:41:20.230-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:20.230-0500 2019-11-26T14:41:20.230-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:20.230-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:20.303-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:20.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:20.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:20.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:20.550-0500 2019-11-26T14:41:20.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:20.551-0500 2019-11-26T14:41:20.551-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:20.641-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:20.803-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:21.006-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:21.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:21.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:21.141-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:21.141-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:21.303-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:21.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:21.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:21.433-0500 2019-11-26T14:41:21.433-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:21.433-0500 2019-11-26T14:41:21.433-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:21.433-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:21.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:21.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:21.641-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:21.803-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:21.803-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:22.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:22.141-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:22.303-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:22.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:22.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:22.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:22.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:22.550-0500 2019-11-26T14:41:22.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:22.635-0500 2019-11-26T14:41:22.635-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:22.635-0500 2019-11-26T14:41:22.635-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:22.636-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:22.641-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:22.803-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:22.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:23.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:23.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:23.141-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:23.303-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:23.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:23.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:23.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:23.506-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:23.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:23.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:23.641-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:23.803-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:23.838-0500 2019-11-26T14:41:23.838-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:23.838-0500 2019-11-26T14:41:23.838-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:23.838-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:24.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:24.141-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:24.303-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:24.303-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:24.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:24.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:24.550-0500 2019-11-26T14:41:24.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:24.641-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:24.641-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:24.803-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:25.006-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:25.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:25.041-0500 2019-11-26T14:41:25.041-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:25.041-0500 2019-11-26T14:41:25.041-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:25.041-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:25.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:25.141-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:25.303-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:25.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:25.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:25.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:25.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:25.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:25.641-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:25.803-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:26.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:26.141-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:26.141-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:26.243-0500 2019-11-26T14:41:26.243-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:26.243-0500 2019-11-26T14:41:26.243-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:26.244-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:26.303-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:26.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:26.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:26.506-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:26.550-0500 2019-11-26T14:41:26.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:26.641-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:26.803-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:26.803-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:27.006-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:27.006-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:41:27.006-0500-5ddd7fe65cde74b6784bbf5f", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574797287006), what: "balancer.round", ns: "", details: { executionTimeMillis: 20000, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:27.006-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:27.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:27.141-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:27.303-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:27.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:27.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:27.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:27.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:27.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:27.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:27.446-0500 2019-11-26T14:41:27.446-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:27.446-0500 2019-11-26T14:41:27.446-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:27.451-0500 I NETWORK [conn149] end connection 127.0.0.1:57372 (19 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:41:27.451-0500 2019-11-26T14:41:27.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:27.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:27.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:27.641-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:27.803-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:27.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:27.951-0500 2019-11-26T14:41:27.951-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:27.951-0500 2019-11-26T14:41:27.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:28.141-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:28.303-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:28.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:28.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:28.451-0500 2019-11-26T14:41:28.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:28.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:28.641-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:28.803-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:28.951-0500 2019-11-26T14:41:28.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:29.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:29.141-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:29.303-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:29.303-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:29.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:29.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:29.451-0500 2019-11-26T14:41:29.451-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:29.451-0500 2019-11-26T14:41:29.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:29.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:29.641-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:29.641-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:29.803-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:29.951-0500 2019-11-26T14:41:29.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:30.141-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:30.303-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:30.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:30.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:30.451-0500 2019-11-26T14:41:30.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:30.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:30.550-0500 2019-11-26T14:41:30.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:30.641-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:30.803-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:30.951-0500 2019-11-26T14:41:30.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:31.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:31.141-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:31.141-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:31.303-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:31.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:31.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:31.451-0500 2019-11-26T14:41:31.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:31.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:31.641-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:31.803-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:31.803-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:31.951-0500 2019-11-26T14:41:31.951-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:31.951-0500 2019-11-26T14:41:31.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:32.141-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:32.303-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:32.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:32.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:32.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:32.451-0500 2019-11-26T14:41:32.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:32.641-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:32.803-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:32.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:32.951-0500 2019-11-26T14:41:32.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:33.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:33.141-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:33.303-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:33.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:33.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:33.451-0500 2019-11-26T14:41:33.451-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:33.451-0500 2019-11-26T14:41:33.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:33.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:33.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:33.641-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:33.803-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:33.951-0500 2019-11-26T14:41:33.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:34.141-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:34.303-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:34.303-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:34.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:34.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:34.451-0500 2019-11-26T14:41:34.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:34.550-0500 2019-11-26T14:41:34.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:34.641-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:34.641-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:34.803-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:34.951-0500 2019-11-26T14:41:34.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:35.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:35.142-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:35.303-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:35.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:35.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:35.451-0500 2019-11-26T14:41:35.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:35.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:35.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:35.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:35.642-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:35.643-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:35.803-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:35.951-0500 2019-11-26T14:41:35.951-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:35.951-0500 2019-11-26T14:41:35.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:36.144-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:36.303-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:36.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:36.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:36.451-0500 2019-11-26T14:41:36.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:36.644-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:36.645-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:36.803-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:36.803-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:36.951-0500 2019-11-26T14:41:36.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:37.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:37.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:37.135-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:37.146-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:37.303-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:37.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:37.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:37.451-0500 2019-11-26T14:41:37.451-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:37.451-0500 2019-11-26T14:41:37.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:37.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:37.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:37.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:37.646-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:37.647-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:37.803-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:37.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:37.951-0500 2019-11-26T14:41:37.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:38.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:38.147-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:38.303-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:38.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:38.451-0500 2019-11-26T14:41:38.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:38.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:38.550-0500 2019-11-26T14:41:38.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:38.647-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:38.647-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:38.803-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:38.951-0500 2019-11-26T14:41:38.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:39.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:39.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:39.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:39.147-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:39.303-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:39.303-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:39.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:39.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:39.451-0500 2019-11-26T14:41:39.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:39.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:39.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:39.647-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:39.803-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:39.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:39.951-0500 2019-11-26T14:41:39.951-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:39.951-0500 2019-11-26T14:41:39.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:40.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:40.147-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:40.147-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:40.303-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:40.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:40.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:40.451-0500 2019-11-26T14:41:40.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:40.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:40.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:40.647-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:40.803-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:40.951-0500 2019-11-26T14:41:40.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:41.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:41.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:41.147-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:41.303-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:41.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:41.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:41.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:41.451-0500 2019-11-26T14:41:41.451-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:41.451-0500 2019-11-26T14:41:41.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:41.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:41.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:41.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:41.647-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:41.803-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:41.803-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:41.951-0500 2019-11-26T14:41:41.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:42.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:42.147-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:42.304-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:42.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:42.451-0500 2019-11-26T14:41:42.451-0500 I QUERY [js] Failed to end session { id: UUID("0f463b9d-ebfc-42eb-ae2e-efc5a84bf79e") } due to FailedToSatisfyReadPreference: Could not find host matching read preference { mode: "primary", tags: [ {} ] } for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:42.451-0500 2019-11-26T14:41:42.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:42.451-0500 I NETWORK [conn212] end connection 127.0.0.1:47610 (21 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:42.451-0500 I NETWORK [conn193] end connection 127.0.0.1:46004 (2 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:42.452-0500 I NETWORK [conn150] end connection 127.0.0.1:57408 (18 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:42.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:42.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:42.550-0500 2019-11-26T14:41:42.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:42.647-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:42.804-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:42.804-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:42.951-0500 2019-11-26T14:41:42.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:43.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:43.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:43.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:43.147-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:43.304-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:43.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:43.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:43.451-0500 2019-11-26T14:41:43.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:43.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:43.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:43.647-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:43.647-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:43.804-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:43.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:43.951-0500 2019-11-26T14:41:43.951-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:43.951-0500 2019-11-26T14:41:43.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:44.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:44.147-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:44.304-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:44.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:44.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:44.451-0500 2019-11-26T14:41:44.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:44.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:44.647-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:44.804-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:44.951-0500 2019-11-26T14:41:44.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:45.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:45.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:45.147-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:45.147-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:45.304-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:45.304-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:45.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:45.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:45.451-0500 2019-11-26T14:41:45.451-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:45.451-0500 2019-11-26T14:41:45.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:45.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:45.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:45.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:45.647-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:45.804-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:45.951-0500 2019-11-26T14:41:45.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:46.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:46.147-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:46.304-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:46.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:46.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:46.451-0500 2019-11-26T14:41:46.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:46.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:46.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:46.550-0500 2019-11-26T14:41:46.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:46.647-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:46.804-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:46.951-0500 2019-11-26T14:41:46.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:47.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:47.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:47.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:47.147-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:47.304-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:47.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:47.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:47.451-0500 2019-11-26T14:41:47.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:47.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:47.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:47.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:47.647-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:47.804-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:47.804-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:47.951-0500 2019-11-26T14:41:47.951-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:47.951-0500 2019-11-26T14:41:47.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:48.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:48.147-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:48.304-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:48.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:48.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:48.451-0500 2019-11-26T14:41:48.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:48.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:48.647-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:48.647-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:48.804-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:48.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:48.951-0500 2019-11-26T14:41:48.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:49.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:49.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:49.147-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:49.304-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:49.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:49.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:49.451-0500 2019-11-26T14:41:49.451-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:49.451-0500 2019-11-26T14:41:49.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:49.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:49.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:49.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:49.647-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:49.805-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:49.951-0500 2019-11-26T14:41:49.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:50.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:50.147-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:50.148-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:50.305-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:50.305-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:50.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:50.451-0500 2019-11-26T14:41:50.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:50.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:50.550-0500 2019-11-26T14:41:50.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:50.648-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:50.805-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:50.951-0500 2019-11-26T14:41:50.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:51.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:51.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:51.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:51.148-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:51.148-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:51.305-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:51.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:51.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:51.451-0500 2019-11-26T14:41:51.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:51.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:51.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:51.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:51.648-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:51.805-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:51.951-0500 2019-11-26T14:41:51.951-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:51.951-0500 2019-11-26T14:41:51.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:52.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:52.148-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:52.305-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:52.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:52.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:52.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:52.451-0500 2019-11-26T14:41:52.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:52.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:52.648-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:52.805-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:52.805-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:52.951-0500 2019-11-26T14:41:52.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:53.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:53.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:53.148-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:53.305-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:53.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:53.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:53.451-0500 2019-11-26T14:41:53.451-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:53.451-0500 2019-11-26T14:41:53.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:53.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:53.508-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:53.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:53.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:53.648-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:53.805-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:53.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:53.951-0500 2019-11-26T14:41:53.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:54.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:54.148-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:54.305-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:54.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:54.451-0500 2019-11-26T14:41:54.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:54.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:54.550-0500 2019-11-26T14:41:54.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:54.648-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:54.648-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:54.805-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:54.951-0500 2019-11-26T14:41:54.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:55.008-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:55.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:55.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:55.148-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:55.305-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:55.305-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:55.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:55.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:55.451-0500 2019-11-26T14:41:55.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:55.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:55.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:55.648-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:55.805-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:55.951-0500 2019-11-26T14:41:55.951-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:55.951-0500 2019-11-26T14:41:55.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:56.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:56.148-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:56.148-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:56.305-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:56.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:56.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:56.451-0500 2019-11-26T14:41:56.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:56.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:56.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:56.648-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:56.805-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:56.951-0500 2019-11-26T14:41:56.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:57.008-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:57.008-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:41:57.008-0500-5ddd80055cde74b6784bbf98", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574797317008), what: "balancer.round", ns: "", details: { executionTimeMillis: 20000, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:57.008-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:57.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:57.135-0500 I CONNPOOL [ShardRegistry] Ending idle connection to host localhost:20000 because the pool meets constraints; 1 connections to that host remain open
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:57.135-0500 I NETWORK [conn156] end connection 127.0.0.1:57132 (17 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:57.148-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:57.305-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:57.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:57.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:57.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:57.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:57.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:57.451-0500 2019-11-26T14:41:57.451-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:57.451-0500 2019-11-26T14:41:57.451-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:57.452-0500 2019-11-26T14:41:57.452-0500 I QUERY [js] Failed to end session { id: UUID("130e432a-33ff-4514-93b8-547e3a30c738") } due to FailedToSatisfyReadPreference: Could not find host matching read preference { mode: "primary", tags: [ {} ] } for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:57.452-0500 I NETWORK [conn213] end connection 127.0.0.1:47618 (20 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:57.452-0500 I NETWORK [conn151] end connection 127.0.0.1:57418 (16 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:41:57.453-0500 2019-11-26T14:41:57.453-0500 I NETWORK [js] DBClientConnection failed to send message to localhost:20001 - SocketException: Broken pipe
[fsm_workload_test:agg_out] 2019-11-26T14:41:57.453-0500 2019-11-26T14:41:57.453-0500 I QUERY [js] Failed to end session { id: UUID("5ef8fec0-3ae1-4f6c-a7f5-d5449ae7698c") } due to HostUnreachable: network error while attempting to run command 'endSessions' on host 'localhost:20001'
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:57.453-0500 I NETWORK [conn214] end connection 127.0.0.1:47638 (19 connections now open)
[fsm_workload_test:agg_out] 2019-11-26T14:41:57.461-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:57.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:57.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:57.648-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:57.664-0500 2019-11-26T14:41:57.664-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:57.664-0500 2019-11-26T14:41:57.664-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:57.665-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:57.805-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:57.806-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:57.869-0500 2019-11-26T14:41:57.869-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:57.869-0500 2019-11-26T14:41:57.869-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:57.870-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:41:57.951-0500 2019-11-26T14:41:57.951-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:41:58.076-0500 2019-11-26T14:41:58.076-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:58.076-0500 2019-11-26T14:41:58.076-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:58.076-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:58.148-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:58.286-0500 2019-11-26T14:41:58.286-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:58.287-0500 2019-11-26T14:41:58.286-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:58.287-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:58.305-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:41:58.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:41:58.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:58.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:58.505-0500 2019-11-26T14:41:58.505-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:58.505-0500 2019-11-26T14:41:58.505-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:58.505-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:41:58.550-0500 2019-11-26T14:41:58.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:58.648-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:58.739-0500 2019-11-26T14:41:58.739-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:58.739-0500 2019-11-26T14:41:58.739-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:58.740-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:58.805-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:58.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:41:59.006-0500 2019-11-26T14:41:59.006-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:59.006-0500 2019-11-26T14:41:59.006-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:59.006-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:41:59.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:59.148-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:59.305-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:59.337-0500 2019-11-26T14:41:59.337-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:59.337-0500 2019-11-26T14:41:59.337-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:59.337-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:41:59.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:41:59.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:41:59.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:59.648-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:41:59.648-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:41:59.795-0500 2019-11-26T14:41:59.795-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:59.795-0500 2019-11-26T14:41:59.795-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:41:59.796-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:41:59.805-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:00.148-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:00.305-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:00.305-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:00.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:00.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:00.510-0500 2019-11-26T14:42:00.510-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:00.510-0500 2019-11-26T14:42:00.510-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:00.511-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:42:00.550-0500 2019-11-26T14:42:00.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:00.648-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:00.805-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:01.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:01.148-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:01.148-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:01.305-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:01.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:01.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:01.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:01.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:01.648-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:01.713-0500 2019-11-26T14:42:01.713-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:01.713-0500 2019-11-26T14:42:01.713-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:01.713-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:01.805-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:02.148-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:02.305-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:02.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:02.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:02.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:02.550-0500 2019-11-26T14:42:02.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:02.648-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:02.805-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:02.805-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:02.916-0500 2019-11-26T14:42:02.915-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:02.916-0500 2019-11-26T14:42:02.916-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:02.916-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:03.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:03.148-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:03.305-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:03.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:03.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:03.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:03.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:03.648-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:03.805-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:03.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:04.118-0500 2019-11-26T14:42:04.118-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:04.118-0500 2019-11-26T14:42:04.118-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:04.119-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:04.148-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:04.306-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:04.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:04.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:04.550-0500 2019-11-26T14:42:04.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:04.648-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:04.648-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:04.806-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:05.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:05.149-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:05.306-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:05.307-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:05.321-0500 2019-11-26T14:42:05.321-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:05.321-0500 2019-11-26T14:42:05.321-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:05.321-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:05.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:05.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:05.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:05.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:05.649-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:05.649-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:05.807-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:06.149-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:06.307-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:06.308-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:06.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:06.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:06.523-0500 2019-11-26T14:42:06.523-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:06.523-0500 2019-11-26T14:42:06.523-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:06.524-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:42:06.550-0500 2019-11-26T14:42:06.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:06.649-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:06.808-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:07.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:07.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:07.135-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:07.149-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:07.149-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:07.308-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:07.308-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:07.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:07.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:07.511-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:07.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:07.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:07.649-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:07.726-0500 2019-11-26T14:42:07.726-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:07.726-0500 2019-11-26T14:42:07.726-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:07.726-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:07.808-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:08.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:08.149-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:08.308-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:08.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:08.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:08.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:08.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:08.550-0500 2019-11-26T14:42:08.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:08.649-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:08.808-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:08.929-0500 2019-11-26T14:42:08.928-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:08.929-0500 2019-11-26T14:42:08.929-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:08.929-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:09.011-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:09.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:09.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:09.149-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:09.308-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:09.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:09.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:09.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:09.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:09.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:09.649-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:09.808-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:09.808-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:09.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:10.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:10.131-0500 2019-11-26T14:42:10.131-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:10.131-0500 2019-11-26T14:42:10.131-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:10.132-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:10.149-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:10.308-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:10.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:10.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:10.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:10.550-0500 2019-11-26T14:42:10.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:10.649-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:10.649-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:10.808-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:10.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:11.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:11.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:11.149-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:11.308-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:11.334-0500 2019-11-26T14:42:11.334-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:11.334-0500 2019-11-26T14:42:11.334-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:11.334-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:11.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:11.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:11.511-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:11.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:11.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:11.649-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:11.809-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:12.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:12.149-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:12.149-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:12.308-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:12.308-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:12.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:12.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:12.537-0500 2019-11-26T14:42:12.537-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:12.537-0500 2019-11-26T14:42:12.537-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:12.537-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:42:12.550-0500 2019-11-26T14:42:12.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:12.649-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:12.808-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:13.011-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:13.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:13.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:13.150-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:13.308-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:13.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:13.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:13.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:13.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:13.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:13.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:13.650-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:13.740-0500 2019-11-26T14:42:13.739-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:13.740-0500 2019-11-26T14:42:13.740-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:13.740-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:13.808-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:14.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:14.151-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:14.308-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:14.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:14.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:14.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:14.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:14.550-0500 2019-11-26T14:42:14.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:14.651-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:14.808-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:14.808-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:14.942-0500 2019-11-26T14:42:14.942-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:14.942-0500 2019-11-26T14:42:14.942-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:14.943-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:15.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:15.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:15.151-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:15.309-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:15.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:15.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:15.511-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:15.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:15.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:15.651-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:15.651-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:15.809-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:15.809-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:16.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:16.145-0500 2019-11-26T14:42:16.145-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:16.145-0500 2019-11-26T14:42:16.145-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:16.145-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:16.151-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:16.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:16.309-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:16.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:16.550-0500 2019-11-26T14:42:16.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:16.651-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:16.809-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:16.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:17.011-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:17.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:17.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:17.151-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:17.151-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:17.309-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:17.348-0500 2019-11-26T14:42:17.348-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:17.348-0500 2019-11-26T14:42:17.348-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:17.348-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:17.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:17.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:17.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:17.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:17.651-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:17.809-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:18.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:18.151-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:18.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:18.309-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:18.309-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:18.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:18.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:18.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:18.550-0500 2019-11-26T14:42:18.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:18.551-0500 2019-11-26T14:42:18.551-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:18.551-0500 2019-11-26T14:42:18.551-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:18.551-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:18.651-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:18.809-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:19.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:19.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:19.151-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:19.309-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:19.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:19.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:19.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:19.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:19.511-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:19.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:19.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:19.651-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:19.754-0500 2019-11-26T14:42:19.753-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:19.754-0500 2019-11-26T14:42:19.754-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:19.754-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:19.809-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:20.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:20.151-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:20.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:20.309-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:20.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:20.550-0500 2019-11-26T14:42:20.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:20.651-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:20.651-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:20.809-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:20.809-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:20.957-0500 2019-11-26T14:42:20.956-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:20.957-0500 2019-11-26T14:42:20.957-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:20.957-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:21.011-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:21.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:21.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:21.151-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:21.309-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:21.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:21.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:21.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:21.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:21.651-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:21.809-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:21.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:22.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:22.151-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:22.152-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:22.159-0500 2019-11-26T14:42:22.159-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:22.160-0500 2019-11-26T14:42:22.160-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:22.160-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:22.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:22.309-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:22.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:22.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:22.550-0500 2019-11-26T14:42:22.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:22.652-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:22.809-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:23.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:23.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:23.152-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:23.152-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:23.309-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:23.309-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:23.363-0500 2019-11-26T14:42:23.363-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:23.364-0500 2019-11-26T14:42:23.363-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:23.364-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:23.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:23.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:23.511-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:23.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:23.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:23.652-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:23.809-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:24.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:24.152-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:24.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:24.309-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:24.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:24.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:24.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:24.550-0500 2019-11-26T14:42:24.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:24.566-0500 2019-11-26T14:42:24.566-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:24.566-0500 2019-11-26T14:42:24.566-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:24.567-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:24.652-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:24.809-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:25.011-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:25.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:25.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:25.152-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:25.309-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:25.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:25.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:25.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:25.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:25.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:25.653-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:25.769-0500 2019-11-26T14:42:25.769-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:25.769-0500 2019-11-26T14:42:25.769-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:25.770-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:25.809-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:25.810-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:26.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:26.154-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:26.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:26.310-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:26.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:26.511-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:26.550-0500 2019-11-26T14:42:26.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:26.654-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:26.655-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:26.810-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:26.811-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:26.973-0500 2019-11-26T14:42:26.973-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:26.973-0500 2019-11-26T14:42:26.973-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:26.973-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:27.011-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:27.011-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:42:27.011-0500-5ddd80235cde74b6784bbfd1", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574797347011), what: "balancer.round", ns: "", details: { executionTimeMillis: 20000, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:27.011-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:27.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:27.156-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:27.312-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:27.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:27.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:27.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:27.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:27.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:27.452-0500 2019-11-26T14:42:27.452-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:27.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:27.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:27.656-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:27.657-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:27.812-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:27.813-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:28.158-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:28.176-0500 2019-11-26T14:42:28.176-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:28.176-0500 2019-11-26T14:42:28.176-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:28.176-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:28.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:28.314-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:28.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:28.550-0500 2019-11-26T14:42:28.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:28.658-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:28.659-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:28.813-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:28.814-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:29.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:29.160-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:29.315-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:29.379-0500 2019-11-26T14:42:29.379-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:29.379-0500 2019-11-26T14:42:29.379-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:29.380-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:29.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:29.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:29.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:29.660-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:29.661-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:29.814-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:29.815-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:30.162-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:30.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:30.316-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:30.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:30.550-0500 2019-11-26T14:42:30.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:30.582-0500 2019-11-26T14:42:30.582-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:30.582-0500 2019-11-26T14:42:30.582-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:30.583-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:30.662-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:30.663-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:30.816-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:30.817-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:31.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:31.162-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:31.318-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:31.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:31.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:31.620-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:31.663-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:31.786-0500 2019-11-26T14:42:31.785-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:31.786-0500 2019-11-26T14:42:31.786-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:31.787-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:31.818-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:31.819-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:32.163-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:32.164-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:32.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:32.320-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:32.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:32.550-0500 2019-11-26T14:42:32.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:32.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:32.665-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:32.820-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:32.820-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:32.989-0500 2019-11-26T14:42:32.989-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:32.990-0500 2019-11-26T14:42:32.989-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:32.990-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:33.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:33.165-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:33.166-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:33.321-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:33.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:33.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:33.666-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:33.821-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:33.822-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:34.166-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:34.167-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:34.193-0500 2019-11-26T14:42:34.193-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:34.193-0500 2019-11-26T14:42:34.193-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:34.194-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:34.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:34.323-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:34.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:34.550-0500 2019-11-26T14:42:34.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:34.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:34.667-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:34.823-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:34.824-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:35.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:35.167-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:35.168-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:35.325-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:35.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:35.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:35.397-0500 2019-11-26T14:42:35.397-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:35.397-0500 2019-11-26T14:42:35.397-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:35.398-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:35.509-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:35.669-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:35.825-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:35.826-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:36.169-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:36.170-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:36.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:36.327-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:36.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:36.550-0500 2019-11-26T14:42:36.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:36.601-0500 2019-11-26T14:42:36.601-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:36.602-0500 2019-11-26T14:42:36.602-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:36.602-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:36.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:36.671-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:36.827-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:36.827-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:37.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:37.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:37.135-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:37.171-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:37.172-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:37.328-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:37.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:37.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:37.515-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:37.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:37.673-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:37.806-0500 2019-11-26T14:42:37.806-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:37.806-0500 2019-11-26T14:42:37.806-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:37.806-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:37.828-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:37.829-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:38.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:38.173-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:38.174-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:38.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:38.330-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:38.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:38.550-0500 2019-11-26T14:42:38.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:38.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:38.675-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:38.830-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:38.831-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:39.010-0500 2019-11-26T14:42:39.010-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:39.011-0500 2019-11-26T14:42:39.011-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:39.011-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:39.015-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:39.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:39.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:39.175-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:39.176-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:39.332-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:39.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:39.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:39.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:39.677-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:39.832-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:39.833-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:39.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:40.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:40.177-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:40.178-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:40.215-0500 2019-11-26T14:42:40.215-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:40.216-0500 2019-11-26T14:42:40.216-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:40.216-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:40.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:40.334-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:40.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:40.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:40.550-0500 2019-11-26T14:42:40.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:40.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:40.679-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:40.834-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:40.835-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:41.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:41.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:41.179-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:41.180-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:41.336-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:41.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:41.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:41.420-0500 2019-11-26T14:42:41.419-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:41.420-0500 2019-11-26T14:42:41.420-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:41.421-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:41.515-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:41.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:41.681-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:41.836-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:41.837-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:42.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:42.181-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:42.181-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:42.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:42.337-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:42.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:42.550-0500 2019-11-26T14:42:42.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:42.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:42.624-0500 2019-11-26T14:42:42.624-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:42.624-0500 2019-11-26T14:42:42.624-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:42.624-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:42.682-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:42.837-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:42.837-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:43.015-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:43.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:43.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:43.182-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:43.182-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:43.337-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:43.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:43.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:43.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:43.682-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:43.828-0500 2019-11-26T14:42:43.828-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:43.828-0500 2019-11-26T14:42:43.828-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:43.829-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:43.837-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:43.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:44.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:44.182-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:44.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:44.337-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:44.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:44.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:44.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:44.550-0500 2019-11-26T14:42:44.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:44.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:44.682-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:44.837-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:45.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:45.033-0500 2019-11-26T14:42:45.032-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:45.033-0500 2019-11-26T14:42:45.033-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:45.033-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:45.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:45.182-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:45.337-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:45.337-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:45.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:45.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:45.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:45.515-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:45.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:45.683-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:45.837-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:46.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:46.183-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:46.237-0500 2019-11-26T14:42:46.237-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:46.237-0500 2019-11-26T14:42:46.237-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:46.237-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:46.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:46.337-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:46.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:46.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:46.550-0500 2019-11-26T14:42:46.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:46.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:46.683-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:46.684-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:46.837-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:47.015-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:47.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:47.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:47.184-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:47.337-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:47.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:47.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:47.440-0500 2019-11-26T14:42:47.440-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:47.441-0500 2019-11-26T14:42:47.440-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:47.441-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:47.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:47.684-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:47.685-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:47.837-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:47.837-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:48.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:48.185-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:48.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:48.337-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:48.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:48.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:48.550-0500 2019-11-26T14:42:48.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:48.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:48.643-0500 2019-11-26T14:42:48.643-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:48.644-0500 2019-11-26T14:42:48.643-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:48.644-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:48.685-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:48.686-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:48.837-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:48.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:49.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:49.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:49.186-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:49.337-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:49.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:49.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:49.515-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:49.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:49.686-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:49.687-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:49.837-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:49.846-0500 2019-11-26T14:42:49.846-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:49.846-0500 2019-11-26T14:42:49.846-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:49.847-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:50.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:50.188-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:50.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:50.337-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:50.337-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:50.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:50.550-0500 2019-11-26T14:42:50.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:50.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:50.688-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:50.689-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:50.837-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:51.015-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:51.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:51.051-0500 2019-11-26T14:42:51.050-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:51.051-0500 2019-11-26T14:42:51.051-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:51.051-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:51.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:51.189-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:51.337-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:51.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:51.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:51.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:51.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:51.689-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:51.689-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:51.837-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:52.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:52.189-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:52.255-0500 2019-11-26T14:42:52.255-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:52.255-0500 2019-11-26T14:42:52.255-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:52.256-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:52.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:52.337-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:52.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:52.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:52.550-0500 2019-11-26T14:42:52.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:52.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:52.689-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:52.837-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:52.837-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:53.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:53.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:53.189-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:53.189-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:53.337-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:53.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:53.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:53.459-0500 2019-11-26T14:42:53.459-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:53.460-0500 2019-11-26T14:42:53.459-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:53.460-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:53.515-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:53.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:53.689-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:53.837-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:53.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:54.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:54.189-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:54.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:54.337-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:54.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:54.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:54.550-0500 2019-11-26T14:42:54.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:54.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:54.664-0500 2019-11-26T14:42:54.663-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:54.664-0500 2019-11-26T14:42:54.664-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:54.664-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:54.689-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:54.837-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:55.015-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:55.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:55.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:55.189-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:55.337-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:55.337-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:55.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:55.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:55.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:55.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:55.689-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:55.837-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:55.868-0500 2019-11-26T14:42:55.868-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:55.868-0500 2019-11-26T14:42:55.868-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:55.868-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:56.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:56.190-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:56.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:56.337-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:56.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:56.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:56.515-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:56.550-0500 2019-11-26T14:42:56.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:56.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:56.690-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:56.690-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:56.837-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:57.014-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:57.014-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:42:57.014-0500-5ddd80415cde74b6784bc008", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574797377014), what: "balancer.round", ns: "", details: { executionTimeMillis: 19999, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:57.015-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:57.071-0500 2019-11-26T14:42:57.071-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:57.071-0500 2019-11-26T14:42:57.071-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:57.071-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:57.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:57.190-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:57.337-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:57.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:57.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:57.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:57.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:57.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:42:57.452-0500 2019-11-26T14:42:57.452-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:57.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:57.690-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:57.837-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:57.837-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:58.190-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:58.191-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:42:58.274-0500 2019-11-26T14:42:58.274-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:58.274-0500 2019-11-26T14:42:58.274-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:58.275-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:42:58.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:58.337-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:42:58.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:58.550-0500 2019-11-26T14:42:58.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:42:58.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:58.691-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:58.837-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:58.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:42:59.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:59.191-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:59.191-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:59.337-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:42:59.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:42:59.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:42:59.477-0500 2019-11-26T14:42:59.477-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:59.477-0500 2019-11-26T14:42:59.477-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:42:59.477-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:42:59.691-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:42:59.837-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:00.191-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:00.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:00.337-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:00.338-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:00.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:00.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:00.550-0500 2019-11-26T14:43:00.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:00.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:00.680-0500 2019-11-26T14:43:00.680-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:00.680-0500 2019-11-26T14:43:00.680-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:00.680-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:00.691-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:00.838-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:01.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:01.192-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:01.338-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:01.338-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:01.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:01.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:01.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:01.692-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:01.838-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:01.882-0500 2019-11-26T14:43:01.882-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:01.882-0500 2019-11-26T14:43:01.882-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:01.883-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:02.192-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:02.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:02.338-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:02.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:02.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:02.550-0500 2019-11-26T14:43:02.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:02.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:02.692-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:02.692-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:02.838-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:03.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:03.086-0500 2019-11-26T14:43:03.086-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:03.086-0500 2019-11-26T14:43:03.086-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:03.086-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:03.192-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:03.338-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:03.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:03.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:03.693-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:03.838-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:03.838-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:04.193-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:04.193-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:04.289-0500 2019-11-26T14:43:04.288-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:04.289-0500 2019-11-26T14:43:04.289-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:04.289-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:04.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:04.338-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:04.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:04.550-0500 2019-11-26T14:43:04.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:04.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:04.693-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:04.838-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:04.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:05.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:05.193-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:05.338-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:05.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:05.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:05.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:05.491-0500 2019-11-26T14:43:05.491-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:05.491-0500 2019-11-26T14:43:05.491-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:05.492-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:05.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:05.693-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:05.839-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:06.193-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:06.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:06.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:06.339-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:06.339-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:06.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:06.550-0500 2019-11-26T14:43:06.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:06.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:06.693-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:06.694-0500 2019-11-26T14:43:06.694-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:06.694-0500 2019-11-26T14:43:06.694-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:06.695-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:06.839-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:07.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:07.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:07.135-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:07.193-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:07.339-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:07.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:07.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:07.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:07.516-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:07.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:07.693-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:07.693-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:07.839-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:07.897-0500 2019-11-26T14:43:07.897-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:07.897-0500 2019-11-26T14:43:07.897-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:07.897-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:08.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:08.193-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:08.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:08.339-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:08.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:08.550-0500 2019-11-26T14:43:08.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:08.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:08.693-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:08.839-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:08.839-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:09.016-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:09.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:09.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:09.100-0500 2019-11-26T14:43:09.100-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:09.100-0500 2019-11-26T14:43:09.100-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:09.100-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:09.193-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:09.193-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:09.339-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:09.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:09.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:09.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:09.693-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:09.839-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:09.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:09.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:10.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:10.193-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:10.302-0500 2019-11-26T14:43:10.302-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:10.303-0500 2019-11-26T14:43:10.303-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:10.303-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:10.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:10.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:10.339-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:10.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:10.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:10.550-0500 2019-11-26T14:43:10.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:10.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:10.693-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:10.839-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:11.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:11.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:11.193-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:11.339-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:11.339-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:11.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:11.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:11.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:11.505-0500 2019-11-26T14:43:11.505-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:11.505-0500 2019-11-26T14:43:11.505-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:11.505-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:11.516-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:11.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:11.693-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:11.839-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:12.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:12.193-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:12.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:12.339-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:12.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:12.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:12.550-0500 2019-11-26T14:43:12.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:12.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:12.693-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:12.693-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:12.708-0500 2019-11-26T14:43:12.708-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:12.708-0500 2019-11-26T14:43:12.708-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:12.708-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:12.839-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:13.016-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:13.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:13.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:13.193-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:13.339-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:13.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:13.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:13.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:13.693-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:13.839-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:13.839-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:13.911-0500 2019-11-26T14:43:13.911-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:13.911-0500 2019-11-26T14:43:13.911-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:13.911-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:14.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:14.193-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:14.193-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:14.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:14.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:14.339-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:14.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:14.550-0500 2019-11-26T14:43:14.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:14.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:14.693-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:14.839-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:14.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:15.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:15.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:15.114-0500 2019-11-26T14:43:15.114-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:15.114-0500 2019-11-26T14:43:15.114-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:15.114-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:15.193-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:15.339-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:15.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:15.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:15.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:15.516-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:15.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:15.693-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:15.839-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:16.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:16.193-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:16.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:16.317-0500 2019-11-26T14:43:16.317-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:16.317-0500 2019-11-26T14:43:16.317-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:16.318-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:16.339-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:16.339-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:16.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:16.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:16.550-0500 2019-11-26T14:43:16.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:16.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:16.693-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:16.839-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:17.016-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:17.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:17.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:17.194-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:17.339-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:17.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:17.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:17.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:17.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:17.520-0500 2019-11-26T14:43:17.520-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:17.520-0500 2019-11-26T14:43:17.520-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:17.521-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:17.693-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:17.693-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:17.839-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:18.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:18.193-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:18.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:18.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:18.339-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:18.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:18.550-0500 2019-11-26T14:43:18.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:18.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:18.693-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:18.723-0500 2019-11-26T14:43:18.723-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:18.723-0500 2019-11-26T14:43:18.723-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:18.724-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:18.839-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:18.840-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:19.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:19.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:19.193-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:19.193-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:19.340-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:19.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:19.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:19.516-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:19.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:19.693-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:19.840-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:19.840-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:19.926-0500 2019-11-26T14:43:19.926-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:19.926-0500 2019-11-26T14:43:19.926-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:19.926-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:20.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:20.193-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:20.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:20.340-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:20.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:20.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:20.550-0500 2019-11-26T14:43:20.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:20.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:20.693-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:20.840-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:20.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:21.016-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:21.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:21.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:21.128-0500 2019-11-26T14:43:21.128-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:21.129-0500 2019-11-26T14:43:21.129-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:21.129-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:21.193-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:21.340-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:21.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:21.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:21.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:21.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:21.693-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:21.840-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:22.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:22.193-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:22.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:22.331-0500 2019-11-26T14:43:22.331-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:22.331-0500 2019-11-26T14:43:22.331-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:22.332-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:22.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:22.340-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:22.340-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:22.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:22.550-0500 2019-11-26T14:43:22.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:22.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:22.693-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:22.693-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:22.840-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:23.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:23.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:23.193-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:23.341-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:23.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:23.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:23.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:23.516-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:23.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:23.534-0500 2019-11-26T14:43:23.534-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:23.534-0500 2019-11-26T14:43:23.534-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:23.534-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:23.693-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:23.841-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:24.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:24.193-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:24.193-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:24.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:24.342-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:24.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:24.550-0500 2019-11-26T14:43:24.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:24.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:24.693-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:24.737-0500 2019-11-26T14:43:24.737-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:24.737-0500 2019-11-26T14:43:24.737-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:24.737-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:24.842-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:24.842-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:25.016-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:25.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:25.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:25.193-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:25.342-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:25.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:25.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:25.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:25.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:25.693-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:25.842-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:25.939-0500 2019-11-26T14:43:25.939-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:25.939-0500 2019-11-26T14:43:25.939-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:25.940-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:25.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:26.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:26.193-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:26.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:26.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:26.342-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:26.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:26.516-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:26.550-0500 2019-11-26T14:43:26.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:26.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:26.693-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:26.842-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:27.015-0500 I SHARDING [Balancer] caught exception while doing balance: Could not find host matching read preference { mode: "primary" } for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:27.015-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:43:27.015-0500-5ddd805f5cde74b6784bc040", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574797407015), what: "balancer.round", ns: "", details: { executionTimeMillis: 19999, errorOccured: true, errmsg: "Could not find host matching read preference { mode: "primary" } for set shard-rs0" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:27.016-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:27.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:27.142-0500 2019-11-26T14:43:27.142-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:27.142-0500 2019-11-26T14:43:27.142-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:27.142-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:27.193-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:27.342-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:27.342-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:27.358-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:27.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:27.392-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:27.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:27.395-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:27.452-0500 2019-11-26T14:43:27.452-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:27.620-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:27.693-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:27.693-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:27.842-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:28.193-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:28.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:28.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:28.342-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:28.344-0500 2019-11-26T14:43:28.344-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:28.344-0500 2019-11-26T14:43:28.344-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:28.345-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:28.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:28.550-0500 2019-11-26T14:43:28.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:28.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:28.694-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:28.842-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:29.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:29.194-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:29.194-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:29.342-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:29.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:29.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:29.547-0500 2019-11-26T14:43:29.547-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:29.547-0500 2019-11-26T14:43:29.547-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:29.547-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:29.694-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:29.842-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:29.843-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:30.194-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:30.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:30.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:30.343-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:30.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:30.550-0500 2019-11-26T14:43:30.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:30.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:30.694-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:30.749-0500 2019-11-26T14:43:30.749-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:30.749-0500 2019-11-26T14:43:30.749-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:30.749-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:30.843-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:30.843-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:31.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:31.194-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:31.343-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:31.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:31.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:31.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:31.694-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:31.843-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:31.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:31.952-0500 2019-11-26T14:43:31.952-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:31.952-0500 2019-11-26T14:43:31.952-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:31.952-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:32.194-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:32.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:32.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:32.343-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:32.550-0500 2019-11-26T14:43:32.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:32.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:32.694-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:32.694-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:32.843-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:33.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:33.154-0500 2019-11-26T14:43:33.154-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:33.154-0500 2019-11-26T14:43:33.154-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:33.155-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:33.194-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:33.343-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:33.343-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:33.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:33.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:33.694-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:33.843-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:34.194-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:34.195-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:34.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:34.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:34.343-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:34.357-0500 2019-11-26T14:43:34.357-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:34.357-0500 2019-11-26T14:43:34.357-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:34.357-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:34.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:34.550-0500 2019-11-26T14:43:34.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:34.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:34.695-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:34.843-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:35.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:35.195-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:35.195-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:35.343-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:35.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:35.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:35.508-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:35.559-0500 2019-11-26T14:43:35.559-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:35.560-0500 2019-11-26T14:43:35.559-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:35.560-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:35.695-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:35.843-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:35.843-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:36.195-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:36.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:36.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:36.343-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:36.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:36.550-0500 2019-11-26T14:43:36.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:36.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:36.695-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:36.762-0500 2019-11-26T14:43:36.762-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:36.762-0500 2019-11-26T14:43:36.762-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:36.762-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:36.843-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:36.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:37.017-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:37.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:37.135-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:37.195-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:37.343-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:37.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:37.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:37.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:37.517-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:37.517-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:37.695-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:37.843-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:37.965-0500 2019-11-26T14:43:37.964-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:37.965-0500 2019-11-26T14:43:37.965-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:37.965-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:38.017-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:38.195-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:38.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:38.343-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:38.343-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:38.517-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:38.550-0500 2019-11-26T14:43:38.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:38.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:38.695-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:38.696-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:38.843-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:39.017-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:39.017-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:39.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:39.168-0500 2019-11-26T14:43:39.167-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:39.168-0500 2019-11-26T14:43:39.168-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:39.168-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:39.196-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:39.343-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:39.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:39.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:39.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:39.517-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:39.696-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:39.696-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:39.843-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:39.948-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:40.017-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:40.196-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:40.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:40.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:40.343-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:40.371-0500 2019-11-26T14:43:40.370-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:40.371-0500 2019-11-26T14:43:40.371-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:40.371-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:40.517-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:40.550-0500 2019-11-26T14:43:40.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:40.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:40.696-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:40.843-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:40.844-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:41.017-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:41.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:41.196-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:41.196-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:41.344-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:41.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:41.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:41.517-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:41.517-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:41.573-0500 2019-11-26T14:43:41.573-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:41.573-0500 2019-11-26T14:43:41.573-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:41.574-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:41.697-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:41.844-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:41.845-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:42.017-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:42.197-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:42.197-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:42.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:42.345-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:42.517-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:42.550-0500 2019-11-26T14:43:42.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:42.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:42.697-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:42.776-0500 2019-11-26T14:43:42.776-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:42.776-0500 2019-11-26T14:43:42.776-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:42.776-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:42.845-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:42.845-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:43.017-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:43.017-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:43.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:43.197-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:43.345-0500 I REPL_HB [ReplCoord-0] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:43.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:43.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:43.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:43.517-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:43.697-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:43.845-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:43.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:43.978-0500 2019-11-26T14:43:43.978-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:43.979-0500 2019-11-26T14:43:43.979-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:43.979-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:44.017-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:44.197-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:44.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:44.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:44.345-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:44.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:44.517-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:44.550-0500 2019-11-26T14:43:44.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:44.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:44.697-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:44.845-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:45.017-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:45.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:45.181-0500 2019-11-26T14:43:45.181-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:45.181-0500 2019-11-26T14:43:45.181-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:45.182-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:45.198-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:45.345-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:45.345-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:45.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:45.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:45.517-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:45.517-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:45.698-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:45.698-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:45.845-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:46.017-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:46.199-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:46.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:46.345-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:46.384-0500 2019-11-26T14:43:46.384-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:46.384-0500 2019-11-26T14:43:46.384-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:46.385-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:46.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:46.517-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:46.550-0500 2019-11-26T14:43:46.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:46.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:46.699-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:46.699-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:46.845-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:47.017-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:47.017-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:47.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:47.199-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:47.345-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:47.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:47.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:47.517-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:47.587-0500 2019-11-26T14:43:47.587-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:47.587-0500 2019-11-26T14:43:47.587-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:47.587-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:47.699-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:47.845-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:47.845-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:48.017-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:48.199-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:48.199-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:48.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:48.339-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:48.345-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:48.517-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.550-0500 2019-11-26T14:43:48.550-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:48.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:48.699-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.789-0500 2019-11-26T14:43:48.789-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.790-0500 2019-11-26T14:43:48.789-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.790-0500 ReplSetTest Could not call ismaster on node connection to localhost:20001: Error: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.791-0500 assert.soon failed, msg : Finding primaryThe hang analyzer is automatically called in assert.soon functions. If you are *expecting* assert.soon to possibly fail, call assert.soon with {runHangAnalyzer: false} as the fifth argument (you can fill unused arguments with `undefined`). Running hang analyzer from assert.soon.
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.792-0500 Running hang_analyzer.py for pids [13986,14076,14079,14082,14340,14343,14346]
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.794-0500 2019-11-26T14:43:48.794-0500 I - [js] shell: started program (sh17613): /usr/bin/python ./buildscripts/hang_analyzer.py -c -d 13986,14076,14079,14082,14340,14343,14346
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:48.845-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.944-0500 sh17613| Traceback (most recent call last):
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.945-0500 sh17613| File "./buildscripts/hang_analyzer.py", line 39, in <module>
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.945-0500 sh17613| from buildscripts.resmokelib import core
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.945-0500 sh17613| File "/home/nz_linux/mongo/buildscripts/resmokelib/__init__.py", line 5, in <module>
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.945-0500 sh17613| from . import logging
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.945-0500 sh17613| File "/home/nz_linux/mongo/buildscripts/resmokelib/logging/__init__.py", line 7, in <module>
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.945-0500 sh17613| from . import buildlogger
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.945-0500 sh17613| File "/home/nz_linux/mongo/buildscripts/resmokelib/logging/buildlogger.py", line 10, in <module>
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.945-0500 sh17613| from . import handlers
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.945-0500 sh17613| File "/home/nz_linux/mongo/buildscripts/resmokelib/logging/handlers.py", line 21, in <module>
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.945-0500 sh17613| from . import flush
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.945-0500 sh17613| File "/home/nz_linux/mongo/buildscripts/resmokelib/logging/flush.py", line 10, in <module>
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.945-0500 sh17613| from ..utils import scheduler
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.945-0500 sh17613| File "/home/nz_linux/mongo/buildscripts/resmokelib/utils/__init__.py", line 25
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.945-0500 sh17613| print("Could not open file {}".format(filename), file=sys.stderr)
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.945-0500 sh17613| ^
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.945-0500 sh17613| SyntaxError: invalid syntax
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:48.947-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 assert.soon failed, msg : Finding primaryThe hang analyzer is automatically called in assert.soon functions. If you are *expecting* assert.soon to possibly fail, call assert.soon with {runHangAnalyzer: false} as the fifth argument (you can fill unused arguments with `undefined`).
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 doassert@src/mongo/shell/assert.js:20:14
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 assert.soon@src/mongo/shell/assert.js:350:17
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 assert.soonNoExcept@src/mongo/shell/assert.js:365:9
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 ReplSetTest/this.getPrimary@src/mongo/shell/replsettest.js:865:9
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 setup/this.reestablishConnectionsAfterFailover@jstests/concurrency/fsm_libs/cluster.js:210:21
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 runWorkloads@jstests/concurrency/fsm_libs/resmoke_runner.js:189:13
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 @jstests/concurrency/fsm_libs/resmoke_runner.js:283:1
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 @jstests/concurrency/fsm_libs/resmoke_runner.js:1:2
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 2019-11-26T14:43:48.954-0500 E QUERY [js] Error: assert.soon failed, msg : Finding primaryThe hang analyzer is automatically called in assert.soon functions. If you are *expecting* assert.soon to possibly fail, call assert.soon with {runHangAnalyzer: false} as the fifth argument (you can fill unused arguments with `undefined`). :
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 doassert@src/mongo/shell/assert.js:20:14
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 assert.soon@src/mongo/shell/assert.js:350:17
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 assert.soonNoExcept@src/mongo/shell/assert.js:365:9
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 ReplSetTest/this.getPrimary@src/mongo/shell/replsettest.js:865:9
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 setup/this.reestablishConnectionsAfterFailover@jstests/concurrency/fsm_libs/cluster.js:210:21
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 runWorkloads@jstests/concurrency/fsm_libs/resmoke_runner.js:189:13
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 @jstests/concurrency/fsm_libs/resmoke_runner.js:283:1
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 @jstests/concurrency/fsm_libs/resmoke_runner.js:1:2
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 2019-11-26T14:43:48.954-0500 F - [main] failed to load: jstests/concurrency/fsm_libs/resmoke_runner.js
[fsm_workload_test:agg_out] 2019-11-26T14:43:48.954-0500 2019-11-26T14:43:48.954-0500 E - [main] exiting with code -3
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:48.956-0500 I NETWORK [conn191] end connection 127.0.0.1:45960 (1 connection now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:48.956-0500 I NETWORK [conn216] end connection 127.0.0.1:47646 (18 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.017-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:49.085-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:49.199-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:49.345-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:49.358-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:49.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:49.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.517-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.517-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:49.699-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:49.845-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[fsm_workload_test:agg_out] 2019-11-26T14:43:49.956-0500 2019-11-26T14:43:49.956-0500 I NETWORK [js] trying reconnect to localhost:20001 failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:49.956-0500 2019-11-26T14:43:49.956-0500 I NETWORK [js] reconnect localhost:20001 failed failed
[fsm_workload_test:agg_out] 2019-11-26T14:43:49.956-0500 2019-11-26T14:43:49.956-0500 I QUERY [js] Failed to end session { id: UUID("8d3d5e22-b37c-41d8-a003-41cfe197166e") } due to SocketException: socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:20001, connection attempt failed: SocketException: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused]
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:49.957-0500 I NETWORK [conn103] end connection 127.0.0.1:53200 (10 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:49.957-0500 I NETWORK [conn103] end connection 127.0.0.1:54090 (10 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:49.957-0500 I NETWORK [conn215] end connection 127.0.0.1:47640 (17 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:49.957-0500 I NETWORK [conn99] end connection 127.0.0.1:52840 (11 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:49.958-0500 I NETWORK [conn99] end connection 127.0.0.1:36202 (11 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:49.958-0500 I NETWORK [conn194] end connection 127.0.0.1:46012 (0 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:49.958-0500 I NETWORK [conn57] end connection 127.0.0.1:59156 (0 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.958-0500 I NETWORK [conn152] end connection 127.0.0.1:57420 (15 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:49.969-0500 I NETWORK [conn98] end connection 127.0.0.1:52806 (10 connections now open)
[executor:fsm_workload_test:job0] 2019-11-26T14:43:49.971-0500 agg_out.js ran in 689.53 seconds: failed.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:49.969-0500 I NETWORK [conn211] end connection 127.0.0.1:47606 (16 connections now open)
[executor:fsm_workload_test:job0] 2019-11-26T14:43:49.971-0500 FSM workload jstests/concurrency/fsm_workloads/agg_out.js failed, so stopping...
[executor:fsm_workload_test:job0] 2019-11-26T14:43:49.971-0500 Received a StopExecution exception: FSM workload jstests/concurrency/fsm_workloads/agg_out.js failed.
[executor] 2019-11-26T14:43:49.972-0500 Waiting for threads to complete
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.969-0500 I NETWORK [conn148] end connection 127.0.0.1:57370 (14 connections now open)
[executor] 2019-11-26T14:43:49.972-0500 Threads are completed!
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:49.969-0500 I NETWORK [conn98] end connection 127.0.0.1:36162 (10 connections now open)
[executor] 2019-11-26T14:43:49.973-0500 Summary of latest execution: 2 test(s) ran in 689.55 seconds (0 succeeded, 0 were skipped, 2 failed, 0 errored)
The following tests failed (with exit code):
jstests/concurrency/fsm_workloads/agg_out.js (253 Failure executing JS file)
agg_out:CheckReplDBHashInBackground (1 DB Exception)
If you're unsure where to begin investigating these errors, consider looking at tests in the following order:
agg_out:CheckReplDBHashInBackground
jstests/concurrency/fsm_workloads/agg_out.js
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:49.969-0500 I NETWORK [conn102] end connection 127.0.0.1:53162 (9 connections now open)
[executor:fsm_workload_test:job0] 2019-11-26T14:43:49.973-0500 Running job0_fixture_teardown...
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:49.969-0500 I NETWORK [conn102] end connection 127.0.0.1:54052 (9 connections now open)
[fsm_workload_test:job0_fixture_teardown] 2019-11-26T14:43:49.973-0500 Starting the teardown of ShardedClusterFixture (Job #0).
[ShardedClusterFixture:job0] Stopping all members of the sharded cluster...
[ShardedClusterFixture:job0] All members of the sharded cluster were expected to be running, but weren't.
[ShardedClusterFixture:job0] Stopping config server...
[ShardedClusterFixture:job0:configsvr] Stopping all members of the replica set...
[ShardedClusterFixture:job0:configsvr] Stopping replica set member on port 20000...
[ShardedClusterFixture:job0:configsvr:primary] Stopping mongod on port 20000 with pid 13986...
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.974-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.974-0500 I SHARDING [Balancer] caught exception while doing balance: operation was interrupted
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.974-0500 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "nz_desktop:20000-2019-11-26T14:43:49.974-0500-5ddd80755cde74b6784bc06d", server: "nz_desktop:20000", shard: "config", clientAddr: "", time: new Date(1574797429974), what: "balancer.round", ns: "", details: { executionTimeMillis: 12958, errorOccured: true, errmsg: "operation was interrupted" } }
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.975-0500 W SHARDING [Balancer] Error encountered while logging config change with ID [nz_desktop:20000-2019-11-26T14:43:49.974-0500-5ddd80755cde74b6784bc06d] into collection actionlog: InterruptedDueToReplStateChange: operation was interrupted
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.975-0500 I SHARDING [Balancer] CSRS balancer is now stopped
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.975-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.975-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20000.sock
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.975-0500 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.975-0500 I REPL [signalProcessingThread] shutting down replication subsystems
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.975-0500 I REPL [signalProcessingThread] Stopping replication reporter thread
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.976-0500 I REPL [signalProcessingThread] Stopping replication fetcher thread
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.976-0500 I REPL [signalProcessingThread] Stopping replication applier thread
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:49.976-0500 I REPL [OplogApplier-0] Finished oplog application
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.017-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:50.199-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.309-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:50.345-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:50.345-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:50.475-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.517-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set shard-rs0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:50.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:50.700-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:50.845-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.885-0500 I REPL [BackgroundSync] Stopping replication producer
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.886-0500 I REPL [signalProcessingThread] Stopping replication storage threads
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.886-0500 I ASIO [OplogApplierNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.887-0500 I ASIO [ReplCoordExternNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.887-0500 I ASIO [ReplNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.887-0500 I NETWORK [signalProcessingThread] Dropping all ongoing scans against replica sets
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.887-0500 I ASIO [ReplicaSetMonitor-TaskExecutor] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.887-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20002 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.887-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20005 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.887-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20003 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.887-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20006 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.887-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:50.887-0500 I NETWORK [conn21] end connection 127.0.0.1:51066 (9 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:50.887-0500 I NETWORK [conn25] end connection 127.0.0.1:51402 (8 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:50.887-0500 I NETWORK [conn33] end connection 127.0.0.1:45866 (15 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:50.887-0500 I NETWORK [conn25] end connection 127.0.0.1:52288 (8 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.887-0500 I ASIO [signalProcessingThread] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:50.887-0500 I NETWORK [conn22] end connection 127.0.0.1:34428 (9 connections now open)
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.888-0500 W SHARDING [shard-registry-reload] cant reload ShardRegistry :: caused by :: CallbackCanceled: Callback canceled
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.888-0500 I ASIO [shard-registry-reload] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.888-0500 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.894-0500 I STORAGE [signalProcessingThread] Deregistering all the collections
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.894-0500 I STORAGE [WTOplogJournalThread] Oplog journal thread loop shutting down
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.895-0500 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.895-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.897-0500 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.897-0500 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.897-0500 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.897-0500 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.897-0500 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.897-0500 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.920-0500 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.920-0500 I CONTROL [signalProcessingThread] now exiting
[ShardedClusterFixture:job0:configsvr:primary] 2019-11-26T14:43:50.920-0500 I CONTROL [signalProcessingThread] shutting down with code:0
[ShardedClusterFixture:job0:configsvr:primary] Successfully stopped the mongod on port 20000.
[ShardedClusterFixture:job0:configsvr] Successfully stopped replica set member on port 20000.
[ShardedClusterFixture:job0:configsvr] Successfully stopped all members of the replica set.
[ShardedClusterFixture:job0] Successfully stopped config server.
[ShardedClusterFixture:job0] Stopping mongos...
[ShardedClusterFixture:job0:mongos0] Stopping mongos on port 20007 with pid 14692...
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.932-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.932-0500 I NETWORK [signalProcessingThread] shutdown: going to close all sockets...
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.932-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20007.sock
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.932-0500 I NETWORK [signalProcessingThread] Dropping all ongoing scans against replica sets
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.932-0500 I ASIO [ReplicaSetMonitor-TaskExecutor] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.932-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20002 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.932-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.932-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20000 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.932-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20003 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.932-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20006 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.932-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20005 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:50.932-0500 I NETWORK [conn61] end connection 127.0.0.1:46064 (14 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:50.932-0500 I NETWORK [conn31] end connection 127.0.0.1:51264 (8 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:50.933-0500 I NETWORK [conn32] end connection 127.0.0.1:34626 (8 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:50.932-0500 I NETWORK [conn36] end connection 127.0.0.1:52514 (7 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:50.933-0500 I NETWORK [conn36] end connection 127.0.0.1:51628 (7 connections now open)
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.933-0500 I ASIO [signalProcessingThread] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.933-0500 I ASIO [ShardRegistry] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.933-0500 I ASIO [TaskExecutorPool-0] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.933-0500 W SHARDING [signalProcessingThread] error encountered while cleaning up distributed ping entry for nz_desktop:20007:1574796655:8358214168427282717 :: caused by :: ReplicaSetMonitorRemoved: ReplicaSetMonitor for set config-rs is removed
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.933-0500 W SHARDING [shard-registry-reload] cant reload ShardRegistry :: caused by :: CallbackCanceled: Callback canceled
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.933-0500 I ASIO [shard-registry-reload] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.933-0500 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
[ShardedClusterFixture:job0:mongos0] 2019-11-26T14:43:50.933-0500 I CONTROL [signalProcessingThread] shutting down with code:0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:50.939-0500 I NETWORK [conn63] end connection 127.0.0.1:46076 (13 connections now open)
[ShardedClusterFixture:job0:mongos0] Successfully stopped the mongos on port 20007
[ShardedClusterFixture:job0] Successfully stopped mongos.
[ShardedClusterFixture:job0] Stopping mongos...
[ShardedClusterFixture:job0:mongos1] Stopping mongos on port 20008 with pid 14729...
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.940-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.940-0500 I NETWORK [signalProcessingThread] shutdown: going to close all sockets...
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.940-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20008.sock
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.940-0500 I NETWORK [signalProcessingThread] Dropping all ongoing scans against replica sets
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.940-0500 I ASIO [ReplicaSetMonitor-TaskExecutor] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.940-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20006 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.940-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20000 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.940-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20003 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.940-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20002 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.941-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20005 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.941-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:50.941-0500 I NETWORK [conn70] end connection 127.0.0.1:46122 (12 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:50.941-0500 I NETWORK [conn37] end connection 127.0.0.1:51682 (6 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.941-0500 I ASIO [signalProcessingThread] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:50.941-0500 I NETWORK [conn37] end connection 127.0.0.1:52572 (6 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:50.941-0500 I NETWORK [conn33] end connection 127.0.0.1:51318 (7 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:50.941-0500 I NETWORK [conn34] end connection 127.0.0.1:34676 (7 connections now open)
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.941-0500 I ASIO [ShardRegistry] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.941-0500 I ASIO [TaskExecutorPool-0] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.941-0500 W SHARDING [signalProcessingThread] error encountered while cleaning up distributed ping entry for nz_desktop:20008:1574796656:7765268974563860519 :: caused by :: ReplicaSetMonitorRemoved: ReplicaSetMonitor for set config-rs is removed
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.941-0500 W SHARDING [shard-registry-reload] cant reload ShardRegistry :: caused by :: CallbackCanceled: Callback canceled
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.941-0500 I ASIO [shard-registry-reload] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.942-0500 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
[ShardedClusterFixture:job0:mongos1] 2019-11-26T14:43:50.942-0500 I CONTROL [signalProcessingThread] shutting down with code:0
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:50.946-0500 I NETWORK [conn126] end connection 127.0.0.1:46542 (11 connections now open)
[ShardedClusterFixture:job0:mongos1] Successfully stopped the mongos on port 20008
[ShardedClusterFixture:job0] Successfully stopped mongos.
[ShardedClusterFixture:job0] Stopping shard...
[ShardedClusterFixture:job0:shard0] Stopping all members of the replica set...
[ShardedClusterFixture:job0:shard0] All members of the replica set were expected to be running, but weren't.
[ShardedClusterFixture:job0:shard0] Stopping replica set member on port 20003...
[ShardedClusterFixture:job0:shard0:secondary1] Stopping mongod on port 20003 with pid 14082...
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:50.946-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:50.947-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:50.947-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20003.sock
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:50.947-0500 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:50.947-0500 I REPL [signalProcessingThread] shutting down replication subsystems
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:50.947-0500 I REPL [signalProcessingThread] Stopping replication reporter thread
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:50.947-0500 I REPL [signalProcessingThread] Stopping replication fetcher thread
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:50.947-0500 I REPL [signalProcessingThread] Stopping replication applier thread
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:50.948-0500 I REPL [OplogApplier-0] Finished oplog application
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.200-0500 I REPL_HB [ReplCoord-8] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:51.345-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:51.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:51.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:51.474-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.670-0500 I REPL [BackgroundSync] Stopping replication producer
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.670-0500 I REPL [signalProcessingThread] Stopping replication storage threads
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.671-0500 I ASIO [OplogApplierNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.671-0500 I ASIO [ReplCoordExternNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.672-0500 I ASIO [ReplNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.672-0500 I CONNPOOL [ReplNetwork] Dropping all pooled connections to localhost:20002 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:51.672-0500 I NETWORK [conn11] end connection 127.0.0.1:51160 (5 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.672-0500 I ASIO [ShardRegistry] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.672-0500 I CONNPOOL [ShardRegistry] Dropping all pooled connections to localhost:20000 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.672-0500 I ASIO [TaskExecutorPool-0] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.673-0500 W SHARDING [signalProcessingThread] error encountered while cleaning up distributed ping entry for nz_desktop:20003:1574796657:-2123896925116328441 :: caused by :: ShutdownInProgress: Shutdown in progress
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.673-0500 W SHARDING [shard-registry-reload] cant reload ShardRegistry :: caused by :: CallbackCanceled: Callback canceled
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.673-0500 I ASIO [shard-registry-reload] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.673-0500 I NETWORK [signalProcessingThread] Dropping all ongoing scans against replica sets
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.673-0500 I ASIO [ReplicaSetMonitor-TaskExecutor] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.673-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20006 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.673-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20003 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.673-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.673-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20005 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:51.673-0500 I NETWORK [conn87] end connection 127.0.0.1:52554 (6 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:51.673-0500 I NETWORK [conn192] end connection 127.0.0.1:47354 (10 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.673-0500 I NETWORK [conn86] end connection 127.0.0.1:53820 (5 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:51.673-0500 I NETWORK [conn81] end connection 127.0.0.1:35916 (6 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:51.673-0500 I NETWORK [conn91] end connection 127.0.0.1:52930 (4 connections now open)
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.673-0500 I ASIO [signalProcessingThread] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.674-0500 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.677-0500 I STORAGE [signalProcessingThread] Deregistering all the collections
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.677-0500 I STORAGE [WTOplogJournalThread] Oplog journal thread loop shutting down
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.678-0500 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.678-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.679-0500 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.679-0500 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.679-0500 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.679-0500 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.679-0500 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:51.679-0500 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:51.699-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20003 failed after 2 retries, response status: InterruptedAtShutdown: interrupted at shutdown
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:51.699-0500 I REPL [ReplCoord-4] Member localhost:20003 is now in state RS_DOWN - interrupted at shutdown
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:51.845-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:52.199-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20003 failed after 2 retries, response status: InterruptedAtShutdown: interrupted at shutdown
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:52.346-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:52.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:52.700-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20003 failed after 2 retries, response status: InterruptedAtShutdown: interrupted at shutdown
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:52.846-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:52.846-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:53.013-0500 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:53.014-0500 I CONTROL [signalProcessingThread] now exiting
[ShardedClusterFixture:job0:shard0:secondary1] 2019-11-26T14:43:53.014-0500 I CONTROL [signalProcessingThread] shutting down with code:0
[ShardedClusterFixture:job0:shard0:secondary1] Successfully stopped the mongod on port 20003.
[ShardedClusterFixture:job0:shard0] Successfully stopped replica set member on port 20003.
[ShardedClusterFixture:job0:shard0] Stopping replica set member on port 20002...
[ShardedClusterFixture:job0:shard0:secondary0] Stopping mongod on port 20002 with pid 14079...
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.076-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.076-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.076-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20002.sock
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.076-0500 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.077-0500 I REPL [signalProcessingThread] shutting down replication subsystems
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.077-0500 I REPL [signalProcessingThread] Stopping replication reporter thread
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.077-0500 I REPL [signalProcessingThread] Stopping replication fetcher thread
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.077-0500 I REPL [signalProcessingThread] Stopping replication applier thread
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.077-0500 I REPL [OplogApplier-0] Finished oplog application
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.200-0500 I CONNPOOL [ReplCoord-0] dropping unhealthy pooled connection to localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.200-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20003
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.200-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20003 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20003 (127.0.0.1:20003) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.346-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20001 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20001 (127.0.0.1:20001) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:53.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:53.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.700-0500 I REPL_HB [ReplCoord-4] Heartbeat to localhost:20003 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20003 (127.0.0.1:20003) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.834-0500 I REPL [BackgroundSync] Stopping replication producer
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.834-0500 I REPL [signalProcessingThread] Stopping replication storage threads
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.834-0500 I ASIO [OplogApplierNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.835-0500 I ASIO [ReplCoordExternNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.836-0500 I ASIO [ReplNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.836-0500 I ASIO [ShardRegistry] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.836-0500 I CONNPOOL [ShardRegistry] Dropping all pooled connections to localhost:20000 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.837-0500 I ASIO [TaskExecutorPool-0] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.837-0500 W SHARDING [signalProcessingThread] error encountered while cleaning up distributed ping entry for nz_desktop:20002:1574796657:-961983587514488543 :: caused by :: ShutdownInProgress: Shutdown in progress
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.837-0500 W SHARDING [shard-registry-reload] cant reload ShardRegistry :: caused by :: CallbackCanceled: Callback canceled
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.837-0500 I ASIO [shard-registry-reload] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.837-0500 I NETWORK [signalProcessingThread] Dropping all ongoing scans against replica sets
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.837-0500 I ASIO [ReplicaSetMonitor-TaskExecutor] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.837-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.837-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20002 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.837-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20005 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.837-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20003 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.837-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20006 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.837-0500 I NETWORK [conn50] end connection 127.0.0.1:52022 (3 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:53.837-0500 I NETWORK [conn160] end connection 127.0.0.1:46958 (9 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:53.837-0500 I NETWORK [conn72] end connection 127.0.0.1:52152 (5 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:53.837-0500 I NETWORK [conn66] end connection 127.0.0.1:35514 (5 connections now open)
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.837-0500 I ASIO [signalProcessingThread] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.838-0500 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.841-0500 I STORAGE [signalProcessingThread] Deregistering all the collections
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.841-0500 I STORAGE [WTOplogJournalThread] Oplog journal thread loop shutting down
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.841-0500 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.842-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.843-0500 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.843-0500 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.843-0500 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.843-0500 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.843-0500 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:53.843-0500 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:54.622-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:55.236-0500 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:55.236-0500 I CONTROL [signalProcessingThread] now exiting
[ShardedClusterFixture:job0:shard0:secondary0] 2019-11-26T14:43:55.236-0500 I CONTROL [signalProcessingThread] shutting down with code:0
[ShardedClusterFixture:job0:shard0:secondary0] Successfully stopped the mongod on port 20002.
[ShardedClusterFixture:job0:shard0] Successfully stopped replica set member on port 20002.
[ShardedClusterFixture:job0:shard0] Stopping replica set member on port 20001...
[ShardedClusterFixture:job0:shard0:primary] Stopping mongod on port 20001 with pid 14076...
[ShardedClusterFixture:job0:shard0:primary] mongod on port 20001 was expected to be running, but wasn't. Process exited with code -6.
[ShardedClusterFixture:job0:shard0] Error while stopping replica set member on port 20001: mongod on port 20001 was expected to be running, but wasn't. Process exited with code -6.
[ShardedClusterFixture:job0:shard0] Stopping the replica set fixture failed.
[ShardedClusterFixture:job0] Error while stopping shard: Error while stopping replica set member on port 20001: mongod on port 20001 was expected to be running, but wasn't. Process exited with code -6.
[ShardedClusterFixture:job0] Stopping shard...
[ShardedClusterFixture:job0:shard1] Stopping all members of the replica set...
[ShardedClusterFixture:job0:shard1] Stopping replica set member on port 20006...
[ShardedClusterFixture:job0:shard1:secondary1] Stopping mongod on port 20006 with pid 14346...
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.298-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.299-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.299-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20006.sock
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.299-0500 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.299-0500 I REPL [signalProcessingThread] shutting down replication subsystems
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.299-0500 I REPL [signalProcessingThread] Stopping replication reporter thread
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.299-0500 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to localhost:20004: CallbackCanceled: Reporter no longer valid
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.300-0500 I REPL [signalProcessingThread] Stopping replication fetcher thread
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.300-0500 I REPL [signalProcessingThread] Stopping replication applier thread
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.300-0500 I REPL [BackgroundSync] Replication producer stopped after oplog fetcher finished returning a batch from our sync source. Abandoning this batch of oplog entries and re-evaluating our sync source.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.300-0500 I REPL [BackgroundSync] Stopping replication producer
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.300-0500 I REPL [OplogApplier-0] Finished oplog application
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.300-0500 I REPL [signalProcessingThread] Stopping replication storage threads
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.300-0500 I ASIO [OplogApplierNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.300-0500 I ASIO [ReplCoordExternNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.300-0500 I CONNPOOL [ReplCoordExternNetwork] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.301-0500 I NETWORK [conn22] end connection 127.0.0.1:45680 (8 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.301-0500 I ASIO [ReplNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.301-0500 I CONNPOOL [ReplNetwork] Dropping all pooled connections to localhost:20005 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.301-0500 I NETWORK [conn12] end connection 127.0.0.1:45648 (7 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.301-0500 I NETWORK [conn11] end connection 127.0.0.1:50864 (4 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.302-0500 I ASIO [ShardRegistry] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.302-0500 I CONNPOOL [ShardRegistry] Dropping all pooled connections to localhost:20000 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.302-0500 I ASIO [TaskExecutorPool-0] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.302-0500 W SHARDING [signalProcessingThread] error encountered while cleaning up distributed ping entry for nz_desktop:20006:1574796657:8805374381407459879 :: caused by :: ShutdownInProgress: Shutdown in progress
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.302-0500 W SHARDING [shard-registry-reload] cant reload ShardRegistry :: caused by :: CallbackCanceled: Callback canceled
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.302-0500 I ASIO [shard-registry-reload] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.302-0500 I NETWORK [signalProcessingThread] Dropping all ongoing scans against replica sets
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.302-0500 I ASIO [ReplicaSetMonitor-TaskExecutor] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.302-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20006 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.302-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20003 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.302-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20002 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.303-0500 I NETWORK [conn82] end connection 127.0.0.1:35926 (4 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.303-0500 I NETWORK [conn193] end connection 127.0.0.1:47366 (6 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.303-0500 I ASIO [signalProcessingThread] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.303-0500 I NETWORK [conn88] end connection 127.0.0.1:52568 (3 connections now open)
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.303-0500 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.309-0500 I STORAGE [signalProcessingThread] Deregistering all the collections
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.310-0500 I STORAGE [WTOplogJournalThread] Oplog journal thread loop shutting down
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.310-0500 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.310-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.311-0500 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.312-0500 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.312-0500 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.312-0500 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.312-0500 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.312-0500 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.360-0500 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.361-0500 I CONTROL [signalProcessingThread] now exiting
[ShardedClusterFixture:job0:shard1:secondary1] 2019-11-26T14:43:55.361-0500 I CONTROL [signalProcessingThread] shutting down with code:0
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.391-0500 I CONNPOOL [ReplNetwork] Ending connection to host localhost:20006 due to bad connection status: HostUnreachable: Connection reset by peer; 0 connections to that host remain open
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.391-0500 I CONNPOOL [ReplNetwork] Connecting to localhost:20006
[ShardedClusterFixture:job0:shard1:secondary1] Successfully stopped the mongod on port 20006.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.391-0500 I REPL_HB [ReplCoord-5] Heartbeat to localhost:20006 failed after 2 retries, response status: HostUnreachable: Error connecting to localhost:20006 (127.0.0.1:20006) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1] Successfully stopped replica set member on port 20006.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.391-0500 I REPL [ReplCoord-5] Member localhost:20006 is now in state RS_DOWN - Error connecting to localhost:20006 (127.0.0.1:20006) :: caused by :: Connection refused
[ShardedClusterFixture:job0:shard1] Stopping replica set member on port 20005...
[ShardedClusterFixture:job0:shard1:secondary0] Stopping mongod on port 20005 with pid 14343...
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.392-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to localhost:20001
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.392-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.392-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.392-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20005.sock
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.392-0500 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.392-0500 I REPL [signalProcessingThread] shutting down replication subsystems
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.393-0500 I REPL [signalProcessingThread] Stopping replication reporter thread
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.393-0500 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to localhost:20004: CallbackCanceled: Reporter no longer valid
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.393-0500 I REPL [signalProcessingThread] Stopping replication fetcher thread
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.393-0500 I REPL [signalProcessingThread] Stopping replication applier thread
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.393-0500 I REPL [BackgroundSync] Replication producer stopped after oplog fetcher finished returning a batch from our sync source. Abandoning this batch of oplog entries and re-evaluating our sync source.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.393-0500 I REPL [BackgroundSync] Stopping replication producer
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.393-0500 I REPL [OplogApplier-0] Finished oplog application
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.393-0500 I REPL [signalProcessingThread] Stopping replication storage threads
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.393-0500 I ASIO [OplogApplierNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.394-0500 I ASIO [ReplCoordExternNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.394-0500 I CONNPOOL [ReplCoordExternNetwork] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.394-0500 I NETWORK [conn21] end connection 127.0.0.1:45678 (5 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.394-0500 I ASIO [ReplNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.394-0500 I NETWORK [conn14] end connection 127.0.0.1:45650 (4 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.395-0500 I ASIO [ShardRegistry] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.395-0500 I CONNPOOL [ShardRegistry] Dropping all pooled connections to localhost:20000 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.395-0500 I ASIO [TaskExecutorPool-0] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.395-0500 W SHARDING [signalProcessingThread] error encountered while cleaning up distributed ping entry for nz_desktop:20005:1574796657:6300883275503185230 :: caused by :: ShutdownInProgress: Shutdown in progress
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.395-0500 W SHARDING [shard-registry-reload] cant reload ShardRegistry :: caused by :: CallbackCanceled: Callback canceled
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.395-0500 I ASIO [shard-registry-reload] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.395-0500 I NETWORK [signalProcessingThread] Dropping all ongoing scans against replica sets
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.395-0500 I ASIO [ReplicaSetMonitor-TaskExecutor] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20002 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20005 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20006 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.395-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20003 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.395-0500 I NETWORK [conn36] end connection 127.0.0.1:51392 (2 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.396-0500 I NETWORK [conn95] end connection 127.0.0.1:46196 (3 connections now open)
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.396-0500 I ASIO [signalProcessingThread] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.396-0500 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.401-0500 I STORAGE [signalProcessingThread] Deregistering all the collections
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.401-0500 I STORAGE [WTOplogJournalThread] Oplog journal thread loop shutting down
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.401-0500 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.401-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.402-0500 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.402-0500 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.402-0500 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.402-0500 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.402-0500 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.402-0500 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.442-0500 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.442-0500 I CONTROL [signalProcessingThread] now exiting
[ShardedClusterFixture:job0:shard1:secondary0] 2019-11-26T14:43:55.442-0500 I CONTROL [signalProcessingThread] shutting down with code:0
[ShardedClusterFixture:job0:shard1:secondary0] Successfully stopped the mongod on port 20005.
[ShardedClusterFixture:job0:shard1] Successfully stopped replica set member on port 20005.
[ShardedClusterFixture:job0:shard1] Stopping replica set member on port 20004...
[ShardedClusterFixture:job0:shard1:primary] Stopping mongod on port 20004 with pid 14340...
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.474-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.474-0500 I REPL [RstlKillOpThread] Starting to kill user operations
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.474-0500 I REPL [RstlKillOpThread] Stopped killing user operations
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.474-0500 I REPL [RstlKillOpThread] State transition ops metrics: { lastStateTransition: "stepDown", userOpsKilled: 0, userOpsRunning: 2 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.575-0500 I REPL [RstlKillOpThread] Starting to kill user operations
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.575-0500 I REPL [RstlKillOpThread] Stopped killing user operations
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.575-0500 I REPL [RstlKillOpThread] State transition ops metrics: { lastStateTransition: "stepDown", userOpsKilled: 0, userOpsRunning: 2 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.575-0500 I STORAGE [signalProcessingThread] Failed to stepDown in non-command initiated shutdown path ExceededTimeLimit: No electable secondaries caught up as of 2019-11-26T14:43:55.575-0500. Please use the replSetStepDown command with the argument {force: true} to force node to step down.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.575-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.575-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20004.sock
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.575-0500 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.576-0500 I REPL [signalProcessingThread] shutting down replication subsystems
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.576-0500 I REPL [signalProcessingThread] Stopping replication reporter thread
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.576-0500 I REPL [signalProcessingThread] Stopping replication fetcher thread
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.576-0500 I REPL [signalProcessingThread] Stopping replication applier thread
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:55.577-0500 I REPL [OplogApplier-0] Finished oplog application
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.127-0500 I REPL [BackgroundSync] Stopping replication producer
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.127-0500 I REPL [signalProcessingThread] Stopping replication storage threads
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.127-0500 I ASIO [OplogApplierNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.128-0500 I ASIO [ReplCoordExternNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.129-0500 I ASIO [ReplNetwork] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.129-0500 I CONNPOOL [ReplNetwork] Dropping all pooled connections to localhost:20006 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.129-0500 I CONNPOOL [ReplNetwork] Dropping all pooled connections to localhost:20005 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.130-0500 I ASIO [ShardRegistry] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.130-0500 I CONNPOOL [ShardRegistry] Dropping all pooled connections to localhost:20000 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.130-0500 I ASIO [TaskExecutorPool-0] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.130-0500 W SHARDING [signalProcessingThread] error encountered while cleaning up distributed ping entry for nz_desktop:20004:1574796657:2902281840457103640 :: caused by :: ShutdownInProgress: Shutdown in progress
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.130-0500 W SHARDING [shard-registry-reload] cant reload ShardRegistry :: caused by :: CallbackCanceled: Callback canceled
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.130-0500 I ASIO [shard-registry-reload] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.130-0500 I NETWORK [signalProcessingThread] Dropping all ongoing scans against replica sets
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.130-0500 I ASIO [ReplicaSetMonitor-TaskExecutor] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.130-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20003 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.130-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20004 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.130-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to localhost:20002 due to ShutdownInProgress: Shutting down the connection pool
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.131-0500 I NETWORK [conn67] end connection 127.0.0.1:46110 (2 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.131-0500 I ASIO [signalProcessingThread] Killing all outstanding egress activity.
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.131-0500 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.136-0500 I STORAGE [signalProcessingThread] Deregistering all the collections
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.137-0500 I STORAGE [WTOplogJournalThread] Oplog journal thread loop shutting down
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.137-0500 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.137-0500 W QUERY [conn29] GetMore command executor error: FAILURE, status: InterruptedAtShutdown: interrupted at shutdown, stats: { stage: "COLLSCAN", nReturned: 50813, executionTimeMillisEstimate: 43, works: 61577, advanced: 50813, needTime: 5382, needYield: 0, saveState: 5662, restoreState: 5661, isEOF: 0, direction: "forward", minTs: Timestamp(1574796653, 3), docsExamined: 50813 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.137-0500 W QUERY [conn30] GetMore command executor error: FAILURE, status: InterruptedAtShutdown: interrupted at shutdown, stats: { stage: "COLLSCAN", nReturned: 50813, executionTimeMillisEstimate: 51, works: 61643, advanced: 50813, needTime: 5415, needYield: 0, saveState: 5695, restoreState: 5694, isEOF: 0, direction: "forward", minTs: Timestamp(1574796653, 3), docsExamined: 50813 }
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.137-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.137-0500 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.137-0500 I NETWORK [conn30] end connection 127.0.0.1:45768 (1 connection now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.137-0500 I NETWORK [conn29] end connection 127.0.0.1:45766 (0 connections now open)
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.137-0500 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.137-0500 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.137-0500 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.137-0500 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.137-0500 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.170-0500 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.170-0500 I CONTROL [signalProcessingThread] now exiting
[ShardedClusterFixture:job0:shard1:primary] 2019-11-26T14:43:56.170-0500 I CONTROL [signalProcessingThread] shutting down with code:0
[ShardedClusterFixture:job0:shard1:primary] Successfully stopped the mongod on port 20004.
[ShardedClusterFixture:job0:shard1] Successfully stopped replica set member on port 20004.
[ShardedClusterFixture:job0:shard1] Successfully stopped all members of the replica set.
[ShardedClusterFixture:job0] Successfully stopped shard.
[ShardedClusterFixture:job0] Stopping the sharded cluster fixture failed.
[fsm_workload_test:job0_fixture_teardown] 2019-11-26T14:43:56.197-0500 An error occurred during the teardown of ShardedClusterFixture (Job #0): Error while stopping shard: Error while stopping replica set member on port 20001: mongod on port 20001 was expected to be running, but wasn't. Process exited with code -6.
[executor:fsm_workload_test:job0] 2019-11-26T14:43:56.199-0500 job0_fixture_teardown ran in 6.23 seconds: failed.
[executor:fsm_workload_test:job0] 2019-11-26T14:43:56.199-0500 The teardown of ShardedClusterFixture (Job #0) failed.
[executor] 2019-11-26T14:43:56.199-0500 Teardown of ShardedClusterFixture (Job #0) of job 0 was not successful
[resmoke] 2019-11-26T14:43:56.199-0500 ================================================================================
[resmoke] 2019-11-26T14:43:56.199-0500 Summary of concurrency_sharded_replication suite: Executed 6 times in 786.10 seconds:
* All 6 test(s) passed in 23.80 seconds.
* All 5 test(s) passed in 26.26 seconds.
* All 5 test(s) passed in 10.09 seconds.
* All 5 test(s) passed in 12.08 seconds.
* All 5 test(s) passed in 24.32 seconds.
* 2 test(s) ran in 689.55 seconds (0 succeeded, 0 were skipped, 2 failed, 0 errored)
The following tests failed (with exit code):
jstests/concurrency/fsm_workloads/agg_out.js (253 Failure executing JS file)
agg_out:CheckReplDBHashInBackground (1 DB Exception)
If you're unsure where to begin investigating these errors, consider looking at tests in the following order:
agg_out:CheckReplDBHashInBackground
jstests/concurrency/fsm_workloads/agg_out.js
[resmoke] 2019-11-26T14:43:56.200-0500 Exiting with code: 2